The fastest cache library written in C# for items with a set expiration time. Easy to use, thread-safe and light on memory. Optimized to scale from dozens to millions of items. Features lock-free reads and writes, allocation-free reads and automatic eviction. Credit to Vladimir Sadov for his implementation of NonBlocking.ConcurrentDictionary, which is used as the underlying store.
Install via `dotnet add package FastCache.Cached` or `Install-Package FastCache.Cached`.
Get cached value or save a new one with an expiration of 60 minutes:

```csharp
public FinancialReport GetReport(int month, int year)
{
    if (Cached<FinancialReport>.TryGet(month, year, out var cached))
    {
        return cached.Value;
    }

    var report = ComputeReport(month, year); // placeholder: expensive computation that retrieves data and calculates the report

    return cached.Save(report, TimeSpan.FromMinutes(60));
}
```
Wrap and cache the result of a regular method call:

```csharp
var report = Cached.GetOrCompute(month, year, GetReport, TimeSpan.FromMinutes(60));
```

Or an async one:

```csharp
// For methods that return Task<T> or ValueTask<T>
var report = await Cached.GetOrCompute(month, year, GetReportAsync, TimeSpan.FromMinutes(60));
```
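For reference, a method passed to `GetOrCompute` takes the same parameters that form the cache key and returns the value (or a `Task<T>`/`ValueTask<T>` of it). A minimal sketch with placeholder bodies — `GetReport`, `GetReportAsync` and `FinancialReport` are the names used in the examples above; the bodies here are illustrative assumptions, not the library's code:

```csharp
using System;
using System.Threading.Tasks;

public sealed record FinancialReport(decimal Total); // placeholder shape

public static class Reports
{
    // Synchronous shape: parameters match the cache key (month, year)
    public static FinancialReport GetReport(int month, int year) =>
        new FinancialReport(Total: 0m); // placeholder for the real computation

    // Asynchronous shape: returns Task<T>, which GetOrCompute awaits
    public static async Task<FinancialReport> GetReportAsync(int month, int year)
    {
        await Task.Delay(10); // stand-in for I/O (database, HTTP, etc.)
        return new FinancialReport(Total: 0m);
    }
}
```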
Save the value to cache, but only if the cache size is below the limit:

```csharp
public FinancialReport GetReport(int month, int year)
{
    if (Cached<FinancialReport>.TryGet(month, year, out var cached))
    {
        return cached.Value;
    }

    var report = ComputeReport(month, year); // placeholder: expensive computation

    // The value is saved only while total cache size is below 'limit'
    return cached.Save(report, TimeSpan.FromMinutes(60), limit: 2_500_000);
}
```

```csharp
// GetOrCompute with a maximum cache size limit.
// RAM is usually plenty, but what if the user runs Chrome?
var report = Cached.GetOrCompute(month, year, GetReport, TimeSpan.FromMinutes(60), limit: 2_500_000);
```

Add new data without accessing the cache item first (e.g. when loading a large batch of independent values into the cache):
```csharp
using FastCache.Extensions;
...

foreach (var ((month, year), report) in reportsResultBatch)
{
    report.Cache(month, year, TimeSpan.FromMinutes(60));
}
```

Store a common type (string) in a shared cache store (other callers may share the cache for the same parameter type, in this case int):
```csharp
// GetOrCompute<...V> where V is string.
// To save some other string for the same 'int' number simultaneously, see the option below :)
var userNote = Cached.GetOrCompute(userId, GetUserNoteString, TimeSpan.FromMinutes(5));
```

Or in a separate store by using a value object (recommended):
```csharp
readonly record struct UserNote(string Value);

// GetOrCompute<...V> where V is UserNote
var userNote = Cached.GetOrCompute(userId, GetUserNote, TimeSpan.FromMinutes(5));
```

```csharp
// This is how it looks for TryGet
if (Cached<UserNote>.TryGet(userId, out var cached))
{
    return cached.Value;
}
...
return cached.Save(userNote, TimeSpan.FromMinutes(5));
```

Cache keys are composite: the lookup parameters (e.g. (string, CustomEnum, int)) together with the type of the cached value. Composite keys are structurally evaluated for equality, so different combinations correspond to different cache items.

Expiration is tracked via Environment.TickCount64, which is also significantly faster than DateTime.UtcNow.

Benchmark environment:

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.22000
AMD Ryzen 7 5800X, 1 CPU, 16 logical and 8 physical cores
.NET 6.0.5 (6.0.522.21309), X64 RyuJIT

| Method | Mean | Error | StdDev | Gen 0 | Gen 1 | Allocated |
|---|---|---|---|---|---|---|
| Get: FastCache.Cached | 15.92 ns | 0.367 ns | 0.941 ns | - | - | - |
| Get: MemoryCache | 58.93 ns | 1.207 ns | 1.239 ns | - | - | - |
| Get: CacheManager | 167.03 ns | 3.395 ns | 9.002 ns | 0.0105 | - | 176 B |
| Get: LazyCache | 74.46 ns | 1.510 ns | 2.214 ns | - | - | - |
| Add/Upd: FC.Cached | 34.57 ns | 0.920 ns | 2.711 ns | 0.0024 | - | 40 B |
| Add/Upd: MemoryCache | 206.15 ns | 4.127 ns | 8.049 ns | 0.0134 | - | 224 B |
| Add/Upd: CacheManager | 1,052.22 ns | 20.926 ns | 27.209 ns | 0.0744 | - | 1,248 B |
| Add/Upd: LazyCache | 281.60 ns | 3.984 ns | 3.532 ns | 0.0286 | - | 480 B |
Note: cached.Save(param1...param7, expiration) will either add a new value or replace the existing one, updating its expiration.

Throughput saturation means that all necessary data structures are fully available in the CPU cache and the branch predictor has learned the branch patterns of the executed code. This is only possible in scenarios such as items being retrieved or added/updated in a tight loop, or very frequently on the same cores. Real-world performance will therefore not saturate maximum throughput and will instead be bottlenecked by memory access latency and branch misprediction stalls. As a result, you can expect performance variance of 1-10x the minimum latency, depending on hardware and outside factors.
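To make the composite-key behavior concrete, here is a hedged sketch. It assumes TryGet mirrors Save's multi-parameter overloads (Save accepts up to 7 key parameters per the note above); `UserNote`, `Region` and all parameter values are invented for illustration:

```csharp
enum Region { EU, US } // hypothetical enum used as part of the key

readonly record struct UserNote(string Value);

// The key is (parameters + cached value type), compared structurally.
// Both calls below address the SAME cache item:
Cached<UserNote>.TryGet("alice", Region.EU, 42, out var a);
Cached<UserNote>.TryGet("alice", Region.EU, 42, out var b);

// Changing any parameter, or the cached value type, addresses a DIFFERENT item:
Cached<UserNote>.TryGet("alice", Region.US, 42, out var c); // different enum value
Cached<string>.TryGet("alice", Region.EU, 42, out var d);   // different value type

// Save adds a new value or replaces the existing one, updating its expiration:
a.Save(new UserNote("note"), TimeSpan.FromMinutes(5));
```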