High-performance B+Tree index engine for .NET. Works with MemoryStream or FileStream, fixed-size pages, and deterministic memory use. Suitable for embedded systems, POS terminals, kiosks, scanners, and edge devices. Supports balanced insert/erase, leaf-chain traversal, and deferred flush write-back caching. Ideal for KV stores, embedded storage engines, OLTP index tables, queue logs, and append-only logs.
```shell
dotnet add package BTreePlus
```

High-performance, file-backed B+Tree engine for .NET — up to 7× faster than SQLite on 1B-row workloads.
2.8–4.0 million inserts/sec on NVMe
Zero dependencies · .NET Standard 2.0 · Embeddable · Deterministic performance
👉 If you’re evaluating BTreePlus for production and want help with design or tuning, email btplus@mmhsys.com.
☕ If this library saves you days of work, consider buying me a coffee: https://buymeacoffee.com/koyllis
Documentation 👉 https://mmhsys.com/BTreePlus.pdf
Most storage engines are slow because they are generic.
BTreePlus is purpose-built for high-throughput inserts, sorted key lookups, and range scans — ideal for POS/ERP secondary indexes, logs, scanners, kiosks, IoT, edge devices, analytics, and custom storage engines.
BTreePlus is the core of a database index — available as a lightweight embedded library.
Same hardware, NVMe, 32KB pages, .NET 9.0, Linux:
| Rows | SQLite | BTreePlus Pro | Improvement |
|---|---|---|---|
| 4M | 4.53 s | 1.03 s | 4.4× |
| 1B | 2410 s | 346 s | 7× |
Workload: bulk insert of 1,000,000,000 records (14-byte payload each)
Hardware: Intel i9-13900, Samsung 990 PRO NVMe
Durability: WAL enabled for both engines
| Engine | Time |
|---|---|
| PostgreSQL | 6 min 08 s |
| BTreePlus | 5 min 10 s |
➡ BTreePlus is ~16% faster than PostgreSQL under identical WAL/durable settings.
Pro Edition cache required for full performance.
If you know why you need a B+Tree, you already understand what this is.
BTreePlus is ideal for environments where memory is constrained and performance must stay deterministic. Typical use cases include POS terminals, kiosks, scanners, ERP/POS secondary indexes, industrial controllers, IoT, edge devices, and local-first applications.
Community Edition is full CRUD with direct file I/O. Pro Edition adds caching, sharding, and range scans for high-volume workloads.
The Community Edition includes the essential functionality for embedded indexes:
| Operation | Description |
|---|---|
| `Insert()` | Insert or upsert a key/value pair |
| `Find()` | Look up a key and read its value |
| `Bof()` | Move the cursor to the first key |
| `Eof()` | Move the cursor to the last key |
| `Next()` | Move forward in sorted order |
| `Prev()` | Move backward in sorted order |
| `Erase()` | Erase a record |
| `Count` | Number of records |
The Pro Edition unlocks the full storage engine:
Enables aggressive caching of internal and leaf pages.
This is the same cache system used in the 1B-row benchmark.
Efficient iteration over a key range:
```csharp
bt.Range(fromKey, toKey, callback);
```
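The callback shape is not documented in this README; assuming it receives each matching key/value pair (a hypothetical `Action<byte[], byte[]>`), a Pro-edition range scan might look like this sketch:

```csharp
// Sketch only — the callback signature is an assumption, not the confirmed Pro API.
byte[] from = Encoding.UTF8.GetBytes("1101000000");
byte[] to   = Encoding.UTF8.GetBytes("1101999999");

bt.Range(from, to, (key, value) =>
{
    // Keys arrive in sorted order within [from, to].
    Console.WriteLine($"{Encoding.UTF8.GetString(key)} = {BitConverter.ToInt32(value, 0)}");
});
```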
Built-in partitioning for massive datasets.
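The sharding API itself is not shown in this README. Purely as an illustration of the idea, horizontal partitioning can be emulated on top of the documented `CreateOrOpen` call by routing each key to one of N trees (file names and the `ShardOf` helper below are hypothetical):

```csharp
// Illustration only — not the built-in Pro sharding API.
const int shardCount = 4;
var shards = new BTree[shardCount];

for (int i = 0; i < shardCount; i++)
    shards[i] = BTree.CreateOrOpen(
        path: $"data.{i}.btp",
        keyBytes: 10, dataBytes: 4, pageSize: 16,
        enableCache: false, max_recs: 32_000_000, balance: true);

// Route each key to a shard by a stable hash of its leading bytes.
static int ShardOf(byte[] key, int n) => (key[0] * 31 + key[1]) % n;

byte[] k = Encoding.UTF8.GetBytes("1101234567");
shards[ShardOf(k, shardCount)].Insert(k, BitConverter.GetBytes(1234));
```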
| Feature | Community | Pro |
|---|---|---|
| Insert / Find / Next / Prev | ✔ | ✔ |
| Bof / Eof | ✔ | ✔ |
| Direct file I/O (no cache) | ✔ | ✔ |
| High-performance page cache | ✖ | ✔ |
| Erase() deletion | ✔ | ✔ |
| Range scan API | ✖ | ✔ |
| Sharding (horizontal scaling) | ✖ | ✔ |
| Large-scale performance (1B+ rows) | ✖ | ✔ |
| Enterprise support | ✖ | ✔ |
| Integrity check and statistics | ✖ | ✔ |
| Bulk Insert | ✖ | ✔ |
For Pro licensing: btplus@mmhsys.com
pageSize = number of key/value records per page (not bytes).
Allowed range: 1 to 128
For file-backed trees:
Physical page size = pageSize × 512 bytes
| pageSize | Behavior | Typical Use |
|---|---|---|
| 1–8 | Small pages, small memory footprint | Embedded / resource-constrained systems |
| 16–32 | Balanced depth and performance | General workloads |
| 64–128 | Large pages, minimal tree height | NVMe / SSD, high-throughput workloads |
Default: pageSize = 8, i.e. 8 × 512 = 4096 bytes (4 KB) physical pages for file-backed trees.
```csharp
using System;
using System.Text;
using mmh;

var bt = BTree.CreateMemory(
    keyBytes: 10,
    dataBytes: 4,
    pageSize: 8,
    enableCache: false, // cache is disabled in the Community Edition
    max_recs: 10_000_000,
    balance: false);

byte[] k = Encoding.UTF8.GetBytes("1101234567");
byte[] v = BitConverter.GetBytes(1234);

bt.Insert(k, v);
bt.Commit();
bt.Close(); // optional
```
```csharp
using System;
using System.Text;
using mmh;

var bt = BTree.CreateOrOpen(
    path: "data.btp",
    keyBytes: 10,
    dataBytes: 4,
    pageSize: 16,
    enableCache: false, // cache is disabled in the Community Edition
    max_recs: 32_000_000,
    balance: true);

byte[] k = Encoding.UTF8.GetBytes("1101234567");
byte[] v = BitConverter.GetBytes(1234);

bt.Insert(k, v);
bt.Commit();
bt.Close();
```
Reopen later:
```csharp
var bt = BTree.Open("data.btp", enableCache: false, balance: true);
```
```csharp
bt.Bof(); // move to the first leaf key

ReadOnlySpan<byte> key;
Span<byte> data = stackalloc byte[4];

while (bt.Next(out key, data))
{
    // use 'key' and 'data'
}
```
```csharp
Span<byte> key = stackalloc byte[8];
BitConverter.GetBytes(1234L).CopyTo(key);

Span<byte> value = stackalloc byte[16];

if (bt.Find(key, value))
{
    string text = Encoding.ASCII.GetString(value);
    Console.WriteLine($"FOUND → {text}");
}
```
```csharp
using System;
using mmh;

// Create a BTree with fixed key/value sizes
var tree = BTree.CreateMemory(8, 8);

// Create an Int64 → Int64 dictionary
var dict = BTreeDictionaryFactory.CreateLongToLong(tree);

// Add / overwrite
dict.Add(10L, 100L);
dict[20L] = 200L;

// Read
var v = dict[10L]; // 100
bool ok = dict.ContainsKey(20L);

// Remove
dict.Remove(10L);

// Enumerate
foreach (var kv in dict)
{
    Console.WriteLine($"{kv.Key} = {kv.Value}");
}
```
Changes become durable when you call `Commit()`. If the process stops before `Commit()`, the tree will reopen in a consistent state. This design keeps the engine extremely fast and lightweight.
MIT License
For Pro Edition licensing, feature access, or enterprise support: btplus@mmhsys.com
BTreePlus — When SQLite is too slow, and performance matters.