Extend0.BlittableAdapter.Generator generates blittable object structs (GC free) from `.blit.json` definition files:

```shell
$ dotnet add package Extend0.BlittableAdapter.Generator
```

Extend0 is a small .NET utility library that provides three main building blocks: cross-process singleton services (`Extend0.Lifecycle.CrossProcess`), allocation-free metadata storage with source generators (`Extend0.Metadata`), and fire-and-forget task utilities (`TaskExtensions.Forget`).
The library is designed for services that need a single owner across multiple processes while still allowing simple in-process use when IPC is unnecessary, and for tools that need predictable, allocation-free metadata persistence.
Add project references from your solution. For example, from the repository root:
```shell
dotnet add <YourProject>.csproj reference Extend0/Extend0.csproj

# Optional source generators
dotnet add <YourProject>.csproj reference Extend0.MetadataEntry.Generator/Extend0.MetadataEntry.Generator.csproj
dotnet add <YourProject>.csproj reference Extend0.BlittableAdapter.Generator/Extend0.BlittableAdapter.Generator.csproj
```
This approach is recommended when you are working from a clone of the repository and want to consume the projects directly.
You can also use the GitHub Releases section of the repository:
Source generators are published as separate artifacts when applicable.
Extend0 is available through the NuGet registry:
```shell
dotnet add package Extend0
```
NuGet versions are aligned with GitHub releases.
The library currently targets:
Dependencies are minimal:
The Extend0.Lifecycle.CrossProcess namespace exposes the components needed to host a single service instance across processes:
- `CrossProcessSingleton<TService>` wires up ownership, IPC hosting, and the static `Service` accessor.
- `CrossProcessServiceBase<TService>` provides diagnostics helpers (`PingAsync`, `GetServiceInfoAsync`, `CanConnectAsync`) and hosting utilities for named-pipe servers.
- `CrossProcessSingletonOptions` controls whether you run in-process or cross-process, which pipe name to use, and how aggressively to overwrite existing owners.

The snippet below shows a simple clock service that runs as a cross-process singleton. The first process to start becomes the owner (hosting the named-pipe server); subsequent processes transparently act as clients through the generated proxy.
```csharp
using Extend0.Lifecycle.CrossProcess;
using Microsoft.Extensions.Logging;

var closing = false;

using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder
        .SetMinimumLevel(LogLevel.Debug)
        .AddProvider(new SomeLoggerProvider()); // substitute your own ILoggerProvider
});
var loggerInstance = loggerFactory.CreateLogger<Clock>();

// The first instance to start becomes the owner; later processes become clients.
var _instance = new ClockSingleton(loggerInstance);

Console.CancelKeyPress += (_, __) => closing = true;

while (!closing)
{
    Console.Clear();
    try
    {
        Console.WriteLine(ClockSingleton.IsOwner
            ? $"[Owner {ClockSingleton.Service.ContractName}]"
            : $"[Client {ClockSingleton.Service.ContractName}]");

        if (!ClockSingleton.IsOwner)
        {
            Console.WriteLine(await ClockSingleton.Service.PingAsync());
            //Console.WriteLine(await ClockSingleton.Service.GetServiceInfoAsync());
            Console.WriteLine(await ClockSingleton.Service.NowIsoAsync());
        }
    }
    catch (RemoteInvocationException rEx) when (rEx.HResult == 426)
    {
        // Swallow upgrade-in-progress errors.
    }
    await Task.Delay(1000);
}

public interface IClock : ICrossProcessService
{
    Task<string> NowIsoAsync();
}

public sealed class Clock : CrossProcessServiceBase<IClock>, IClock
{
    protected override string? PipeName => "Extend0.Clock";

    public Task<string> NowIsoAsync() => Task.FromResult(DateTimeOffset.UtcNow.ToString("O"));
}

public sealed class ClockSingleton(ILogger logger) : CrossProcessSingleton<IClock>(
    () => new Clock(),
    new()
    {
        Mode = SingletonMode.CrossProcess,
        CrossProcessName = "Extend0.Clock",
        CrossProcessServer = ".",
        CrossProcessConnectTimeoutMs = 5000,
        Overwrite = true,
        Logger = logger
    })
{
    static ClockSingleton()
    {
        RpcDispatchProxy<IClock>.UpgradeHandler = static async ex =>
        {
            var loggerFactory = new LoggerFactory();
            var logger = loggerFactory.CreateLogger<Clock>();
            try
            {
                // Re-create the singleton when the owner restarts.
                _ = new ClockSingleton(logger);
                Console.WriteLine("[Upgrade] Recreated ClockSingleton. IsOwner = {0}", IsOwner);
                await Task.Yield();
                return true;
            }
            catch (Exception e)
            {
                Console.WriteLine("[Upgrade] Failed to recreate ClockSingleton: {0}", e);
                return false;
            }
        };
    }
}
```
Key behaviors to remember:
- `ClockSingleton` initializes the static `ClockSingleton.Service` property. Clients and owners use the same API.
- `SingletonMode.CrossProcess` enforces a single owner across processes; switch to `SingletonMode.InProcess` to bypass IPC for tests.
- `CrossProcessSingletonOptions.Overwrite` controls whether a new instance replaces an existing owner (useful for upgrades or crash recovery).
- `CrossProcessServiceBase` implements the contract helpers (`PingAsync`, `GetServiceInfoAsync`, `CanConnectAsync`) so your service only needs to provide domain methods.

Extend0's metadata layer provides fixed-size, allocation-free key/value storage backed by memory-mapped files. Two source generators help you define the binary shapes that live in these tables:
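For unit tests, the same singleton pattern can skip IPC entirely. A minimal sketch, mirroring the `ClockSingleton` declaration from the example above (the derived-class name is an assumption for illustration):

```csharp
// Sketch: same constructor shape as ClockSingleton, but in-process for tests.
// SingletonMode.InProcess bypasses the named pipe; the static Service accessor
// still resolves, now against the local instance only.
public sealed class InProcessClockSingleton(ILogger logger) : CrossProcessSingleton<IClock>(
    () => new Clock(),
    new()
    {
        Mode = SingletonMode.InProcess, // no IPC, no pipe server
        Logger = logger
    })
{
}
```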
- `Extend0.MetadataEntry.Generator` reads `[assembly: GenerateMetadataEntry(keyBytes, valueBytes)]` attributes and emits blittable `MetadataEntry{Key}x{Value}` structs plus a typed `MetadataCell` wrapper. The repository declares a catalog of common shapes in `Metadata/Generator.attributes.cs`, covering small tag-style keys up to larger "chubby" entries.
- `Extend0.BlittableAdapter.Generator` consumes `*.blit.json` files and generates blittable structs with inline UTF-8/binary buffers. These adapters are intended for use as typed value columns inside metadata tables.

Declare one or more entry shapes (or use the defaults in `Generator.attributes.cs`):
```csharp
[assembly: Extend0.Metadata.CodeGen.GenerateMetadataEntry(64, 512)]
```
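For intuition, the emitted entry for `(64, 512)` is conceptually a fixed-size blittable struct. The sketch below is illustrative only (field names and layout attributes are assumptions, not the generator's actual output):

```csharp
// Hypothetical shape of a generated entry: inline fixed-size buffers and
// no managed references, so the struct is blittable and mmap-friendly.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public unsafe struct MetadataEntry64x512
{
    public fixed byte Key[64];     // inline 64-byte key buffer
    public fixed byte Value[512];  // inline 512-byte value buffer
}
```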
Describe a table layout using TableSpec, mixing entry cells and blittable payloads:
```csharp
using Extend0.Metadata;
using Extend0.Metadata.Schema;

var spec = new TableSpec(
    Name: "Settings",
    MapPath: "./data/settings.meta",
    Columns: new[]
    {
        // Key/value entry column with 64-byte keys and 512-byte values
        TableSpec.Column<MetadataEntry64x512>("Entries", capacity: 512),

        // Blittable payload column generated from a .blit.json file
        TableSpec.Column<MyBlittablePayload>("Payload", capacity: 512)
    });

spec.SaveToFile("./data/settings.spec.json");
```
Register and use the table through MetaDBManager:
```csharp
using Extend0.Metadata;

var manager = new MetaDBManager(logger: null);
var tableId = manager.RegisterTable(spec, createNow: true);

if (manager.TryGetCreated(spec.Name, out var table) &&
    table.TryGetCell("Entries", row: 0, out var cell))
{
    // `cell` is a view over the fixed-size buffer described by the generated entry type.
}
```
This workflow keeps your on-disk layout deterministic while letting Roslyn generate the unsafe structs needed to interact with the metadata store safely.
Concurrency and Access Failures: MetaDB prevents concurrent writers by opening metadata files with `FileShare.Read`, which blocks other processes from writing to the same file simultaneously. When a second writer attempts to register or open a table that's already mapped elsewhere, an `IOException` will be thrown. This is expected and must be handled by the caller. Public APIs like `RegisterTable`, `Open`, or any operation that triggers mapping (e.g., `createNow: true`) may surface these exceptions. Consumers should implement retry logic with backoff to wait for exclusive access.

If multiple processes need to write to the same file, they should follow an ephemeral access pattern: acquire the table, perform the operation, and explicitly call `.Dispose()` on the `MetadataTable` when done. This releases the memory-mapped file and allows other writers to proceed. While disposed tables remain tracked internally, this has no side effects unless registration is repeated with varying schemas or identifiers. In the future, I plan to provide a `CloseTable` method to fully unregister and dispose a table, clearing it from internal indexes for long-lived scenarios with dynamic table lifecycles.
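A retry-with-backoff wrapper for the contended case might look like the sketch below. The helper name, delay policy, and `int` table-id return type are illustrative assumptions; the `IOException` behavior of `RegisterTable` is as described above:

```csharp
using System;
using System.IO;
using System.Threading;

// Hypothetical helper: retry RegisterTable until the competing writer
// releases the memory-mapped file, backing off between attempts.
static int RegisterWithRetry(MetaDBManager manager, TableSpec spec, int maxAttempts = 5)
{
    var delay = TimeSpan.FromMilliseconds(50);
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return manager.RegisterTable(spec, createNow: true);
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            Thread.Sleep(delay);  // use Task.Delay in async callers
            delay += delay;       // 50 ms, 100 ms, 200 ms, ...
        }
    }
}
```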
This section documents the current micro-benchmarks for MetaDBManager and its columnar storage engine.
All numbers below come from MetaDBManagerBench (BenchmarkDotNet) and are meant both as a sanity check and as a rough “performance contract” for future work.
In benchmarks with large, sequential column operations, MetaDBManager becomes memory-bandwidth bound and behaves like a linear memcpy (O(n)) that saturates the L1, L2, and L3 caches, so throughput is effectively limited by your hardware.
```
BenchmarkDotNet v0.15.4
OS      : Windows 11 (24H2)
CPU     : AMD Ryzen 7 4800H (8C / 16T @ 2.90 GHz)
Runtime : .NET 9.0.8 (RyuJIT x64-v3)

Job config:
- Runtime        = .NET 9.0
- LaunchCount    = 1
- WarmupCount    = 1
- IterationCount = 5
```
Common parameters across most benchmarks:
- Cols: 7
- RowsPerCol: 24 or 2048 (small vs large tables)
- KeySize: 16 bytes
- ValueSize: 64 or 256 bytes
- Ops: 10,000 logical operations per benchmark
- ChildPoolSize: 16
- RefsPerBatch: 1, 4 or 16 (for ref-related tests)

The suite is organized into high-level categories:
- Copy: `Copy_A_to_B_Column0_AllRows`, `Copy_InPlace_TableA_Col0_to_Col1_AllRows`
- Fill: `Fill_Column0_TableA`, `Fill_Column0_TableB`, `Fill_Typed_Small16`, `Fill_Typed_Exact`, `Fill_Raw_Writer`
- Refs: `EnsureRefVec_*`, `LinkRef_*`, `EnumerateRefs_CountSum`
- Register: `RegisterTable_Lazy_NoPersist`, `RegisterTable_Eager_DisposeAndDelete`

Representative results (KeySize=16, ValueSize=64):
| Scenario | Rows/Col | ValueSize | Mean (ns) | GB/s (rd+wr) |
|---|---|---|---|---|
| Copy_A_to_B_Column0_AllRows | 24 | 64 | ~104–107 | ~28–30 |
| Copy_InPlace_TableA_Col0_to_Col1_* | 24 | 64 | ~104–107 | ~28–29 |
| Copy_A_to_B_Column0_AllRows | 2048 | 64 | ~3,709–3,727 | ~70–71 |
| Copy_InPlace_TableA_Col0_to_Col1_* | 2048 | 64 | ~3,709–3,739 | ~70–71 |
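As a sanity check, the GB/s column is consistent with counting each copied byte once as a read and once as a write:

```python
# Recompute the large-copy throughput from the table above:
# bytes moved = 2 * rows * value_size (read + write traffic),
# and bytes per nanosecond is numerically equal to GB/s.
rows, value_size = 2048, 64
mean_ns = 3709                      # lower bound of the measured mean
bytes_moved = 2 * rows * value_size
gb_per_s = bytes_moved / mean_ns
print(round(gb_per_s, 1))           # prints 70.7, matching the ~70-71 GB/s column
```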
Key takeaways:
- In-place copies (Col0 → Col1 in the same table) are within a few percent of inter-table copies, which is good: the in-place path does not introduce hidden overhead.

Representative results for RowsPerCol=24, KeySize=16:
| Method | ValueSize | Mean (ns) | Relative to Fill_Column0_TableA |
|---|---|---|---|
| Fill_Column0_TableA | 64 | ~439–457 | 1.00× |
| Fill_Column0_TableB | 64 | ~420–450 | ~0.95–1.03× |
| Fill_Typed_Small16 | 64 | ~195–202 | ~0.44–0.46× |
| Fill_Raw_Writer | 64 | ~410–417 | ~0.93–0.95× |
| Fill_Typed_Exact | 64 | ~272–281 | ~0.62–0.65× |
For ValueSize=256 the patterns are similar:
- `Fill_Typed_Small16` stays around ~200 ns, outperforming the generic column fill.
- `Fill_Typed_Exact` is a bit slower for large values but still competitive with the baseline column fill.

High-level conclusions:

- Typed fills (`Fill_Typed_*`) are consistently the fastest way to push data into the engine, especially for smaller value sizes.
- The generic column fill (`Fill_Column0_TableA/B`) is competitive, but you pay around 2× versus the specialized typed path on small structs.

The ref-related benchmarks exercise the machinery that manages reference vectors and bulk linking.
Representative small-table numbers (RowsPerCol=24, ValueSize=64):
| Method | Mean | Notes |
|---|---|---|
| EnsureRefVec_Cold_InitOnce | ~2.8 µs | First allocation / cold path |
| EnsureRefVec_Idempotent_Only | ~2.7–2.8 µs | "Already initialized" path (idempotent) |
| EnsureRefVec_And_LinkRef_FromPool | ~3.8–4.2 µs | Ensure + link in one go |
| LinkRef_Bulk_PerRow | ~2.7 ns | Only the linking work (ref pool pre-primed) |
On large tables (RowsPerCol=2048):
- Costs scale with ValueSize and batch params.
- Pure linking (`LinkRef_Bulk_PerRow`) stays noticeably faster, roughly 0.7–0.8× of the ensure+link combo.

Design-wise:

- You can call `EnsureRefVec` defensively without paying a big penalty when it is already initialized.

The Register benchmarks focus on table lifecycle:
- `RegisterTable_Lazy_NoPersist`
- `RegisterTable_Eager_DisposeAndDelete`

The eager path incurs some Gen0 activity.

In other words:
- Typed fills (`Fill_Typed_*`) are the preferred API for hot paths, halving the cost versus generic column fills in typical configurations.

These numbers are my current "performance budget". Any future changes to MetaDBManager should be validated against this suite to avoid silent regressions.
TaskExtensions.Forget lets you safely execute background tasks while:
- routing exceptions to your logger (`ILogger`).
- measuring execution time when `measureDuration` is enabled.

Example:
```csharp
SomeAsyncOperation()
    .Forget(_logger,
        onExceptionMessage: "Background operation failed",
        onExceptionAction: ex => Telemetry.TrackException(ex),
        finallyAction: () => _metrics.Increment("background.done"),
        measureDuration: true);
```
This pattern keeps fire-and-forget work from surfacing unobserved exceptions and provides consistent diagnostics hooks.
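The core of such an extension fits in a few lines. The sketch below is illustrative only, not the library's actual implementation; the signature mirrors the example above:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

// Illustrative Forget-style extension: observe the task's exception,
// run the provided hooks, and optionally time the whole operation.
public static class TaskExtensionsSketch
{
    public static async void Forget(
        this Task task,
        ILogger? logger = null,
        string? onExceptionMessage = null,
        Action<Exception>? onExceptionAction = null,
        Action? finallyAction = null,
        bool measureDuration = false)
    {
        var sw = measureDuration ? Stopwatch.StartNew() : null;
        try
        {
            await task.ConfigureAwait(false);
        }
        catch (Exception ex)
        {
            logger?.LogError(ex, onExceptionMessage ?? "Fire-and-forget task failed");
            onExceptionAction?.Invoke(ex);
        }
        finally
        {
            sw?.Stop();
            if (sw is not null)
                logger?.LogDebug("Fire-and-forget task took {Elapsed}", sw.Elapsed);
            finallyAction?.Invoke();
        }
    }
}
```

Because the method is `async void`, exceptions never propagate to the caller; they are observed inside the method, which is exactly what prevents unobserved-exception escalation.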
From the repository root:
```shell
dotnet build
```
The library contains no unit tests by default; add your own in consumer solutions as needed.