Core query primitives (merge enumerators, LINQ MVP scaffolding) used by Shardis provider packages: primitives and abstractions for cross-shard query execution in Shardis — streaming enumerators, LINQ helpers, executor interfaces, and ergonomic client helpers.

```bash
dotnet add package Shardis.Query --version 0.1.*
```
- `IAsyncEnumerable<T>`-based streaming of results across shards.
- `IShardQueryExecutor` — low-level executor abstraction.
- `IShardQueryClient` / `ShardQueryClient` — ergonomic entrypoint (DI friendly) providing `Query<T>()` and inline composition overloads.
- `Query<T>()`, `Query<T,TResult>(...)` — direct bootstrap without `ShardQuery.For<T>()`.
- `FirstOrDefaultAsync`, `AnyAsync`, `CountAsync` — client-side aggregation helpers.
- Failure strategies (`FailFastFailureStrategy`, `BestEffortFailureStrategy`) with a DI decoration helper in the EF Core provider.

```csharp
// Using the ergonomic client (recommended)
var client = provider.GetRequiredService<IShardQueryClient>();

var adults = client.Query<Person, string>(p => p.Age >= 18, p => p.Name);
var first = await adults.FirstOrDefaultAsync();
var count = await adults.CountAsync();

// Or directly from an executor
var exec = provider.GetRequiredService<IShardQueryExecutor>();
var q = exec.Query<Person>()
            .Where(p => p.IsActive)
            .Select(p => new { p.Id, p.Name });
var any = await q.AnyAsync();
```
- Ordered merge (`CreateOrdered`, preview).
- Best-effort failure handling (`services.DecorateShardQueryFailureStrategy(BestEffortFailureStrategy.Instance)`).
- Bounded unordered-merge buffering (`EfCoreExecutionOptions.ChannelCapacity`).
- Cancellation: all async operators accept a `CancellationToken` propagated to underlying providers. Provider-specific timeouts (e.g. EF Core `PerShardCommandTimeout`) are applied per shard.
Failure handling is fail-fast by default (the first exception cancels the merge). Opt into best-effort via the provider's decorator registration (EF Core: `DecorateShardQueryFailureStrategy`).
The unordered merge supports bounded buffering via provider options (channel capacity). Use it to smooth producer spikes or to cap memory usage.
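Wiring both knobs together might look like the sketch below. `EfCoreExecutionOptions`, `PerShardCommandTimeout`, `DecorateShardQueryFailureStrategy`, and `BestEffortFailureStrategy` are named in this README; the registration helper `AddShardisEfCoreQueries` is hypothetical and stands in for whatever executor registration your provider package exposes.

```csharp
// Sketch: opting into best-effort failures and a bounded merge channel.
// NOTE: AddShardisEfCoreQueries is a hypothetical registration helper.
services.AddShardisEfCoreQueries(options =>
{
    options.ChannelCapacity = 256;                            // bounded unordered-merge buffer
    options.PerShardCommandTimeout = TimeSpan.FromSeconds(5); // applied per shard
});

// Replace the default fail-fast strategy with best-effort.
services.DecorateShardQueryFailureStrategy(BestEffortFailureStrategy.Instance);
```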
Shardis.Query emits both tracing Activities and an OpenTelemetry Histogram for end-to-end merge latency.
Latency histogram: `shardis.query.merge.latency` (unit: ms).

Tag schema (stable):
- `db.system` – storage system (e.g. `postgresql`)
- `provider` – logical provider identifier (e.g. `efcore`, `inmemory`, `marten`)
- `shard.count` – total configured shards in topology
- `target.shard.count` – shards actually targeted (respects `WhereShard`); equals `shard.count` when not targeted
- `merge.strategy` – `unordered` | `ordered`
- `ordering.buffered` – `true` when the ordered path is a buffered/materialized variant
- `fanout.concurrency` – effective parallelism applied (may be lower than configured for a targeted shard subset)
- `channel.capacity` – capacity of the unordered merge channel (`-1` when unbounded / not applicable)
- `failure.mode` – `fail-fast` | `best-effort` (best-effort: partial shard failures are suppressed; `result.status=ok` if ≥ 1 shard succeeded, else `failed`)
- `result.status` – `ok` | `canceled` | `failed`
- `root.type` – short CLR type name of the query root / projection
- `invalid.shard.count` – number of rejected targeted shard IDs (out of range / parse failure; zero when none)

Tracing:
ActivitySource name: `Shardis.Query`. Activities carry the core tags (`shard.count`, `target.shard.count`, `strategy`, etc.) and timing spans.

Enabling (OpenTelemetry example):
```csharp
var exported = new List<Metric>();

var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("Shardis")           // core
    .AddMeter("Shardis.Query")     // query-specific
    .AddInMemoryExporter(exported) // or Prometheus / OTLP
    .Build();
```
Buckets: by default rely on your metrics backend’s dynamic bucketing; for explicit views apply [5,10,20,50,100,200,500,1000,2000,5000] (milliseconds) to the histogram instrument.
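With the OpenTelemetry .NET SDK, explicit buckets can be applied through a metrics view on the instrument named above; this is a sketch and the exporter choice is up to you:

```csharp
// Sketch: apply explicit histogram buckets (milliseconds) via an OpenTelemetry view.
var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("Shardis.Query")
    .AddView(
        instrumentName: "shardis.query.merge.latency",
        new ExplicitBucketHistogramConfiguration
        {
            Boundaries = new double[] { 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000 }
        })
    .AddOtlpExporter() // or any other exporter
    .Build();
```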
Design rationale: see ADR 0006 (Unified Query Latency Single-Emission Model) https://github.com/veggerby/shardis/blob/main/docs/adr/0006-unified-query-latency-single-emission.md.
- Call `AddShardisQueryClient()` after configuring an executor to enable the ergonomic helpers.
- Ordered merge: `EfCoreShardQueryExecutor.CreateOrdered` (materializes all results; suitable for bounded sets only).
- Targets: `net8.0`, `net9.0`; Shardis ≥ 0.1.
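The registration order above (executor first, then client) can be sketched as follows; only `AddShardisQueryClient`, `IShardQueryClient`, and the query helpers are named in this README, while `AddShardisInMemoryQueryExecutor` is a hypothetical placeholder for your provider's executor registration:

```csharp
var services = new ServiceCollection();

// 1. Register an executor first (helper name is hypothetical).
services.AddShardisInMemoryQueryExecutor();

// 2. Then enable the ergonomic client helpers.
services.AddShardisQueryClient();

var provider = services.BuildServiceProvider();

// 3. Compose and run a cross-shard query.
var client = provider.GetRequiredService<IShardQueryClient>();
var names = client.Query<Person, string>(p => p.Age >= 18, p => p.Name);
var first = await names.FirstOrDefaultAsync();
```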