General-purpose library for full-stack use. Provides support for Caching, Collections, Comparers, Conversion, Environment, Events, Exceptions, IO, FileSystem, Lambdas, Logging, Math, Memory, Scopes, Text, Threading, Scheduling and many other aspects.
$ dotnet add package Sphere10.Framework

Developer: Herman Schoenfeld
Copyright: © 2018-Present Herman Schoenfeld
License: MIT NON-AI
Status: Production-Ready
Sphere10 Framework is a low-level, high-performance .NET utility library providing composable data structures and persistence primitives. It excels at scenarios requiring fine-grained control over memory, serialization, and transactional semantics—think blockchain systems, embedded databases, high-volume analytics, and custom storage layers.
Unlike general-purpose application frameworks, Sphere10 Framework doesn't impose an application model or opinionated abstractions. Instead, it offers:
Key Attributes
dotnet add package Sphere10.Framework
The Tools namespace is a defining feature of Sphere10 Framework, providing a global, IntelliSense-discoverable collection of static utility methods organized by domain. This acts as a single point of discovery for developers—instead of searching for the right helper class, simply type Tools. and explore available operations across the entire framework.
📖 Getting Started: See the Tools Reference for the complete catalog and usage patterns.
Framework-Wide Utilities
Windows Integration
Web & Networking
Database Access
Mobile Platforms
Application Framework
using Sphere10.Framework; // Import the framework namespace to access Tools
// String operations
string sanitized = Tools.Text.RemoveWhitespace(userInput);
string truncated = Tools.Text.Truncate(longString, 100);
// Collection operations
var filtered = Tools.Collection.Where(items, x => x.IsActive);
var flattened = Tools.Collection.Flatten(nestedList);
// Cryptographic operations
byte[] hash = Tools.Crypto.SHA256(data);
bool isValid = Tools.Crypto.VerifySignature(data, signature);
// JSON serialization
string json = Tools.Json.Serialize(obj);
var deserialized = Tools.Json.Deserialize<MyType>(json);
// File operations
string tempFile = Tools.FileSystem.GenerateTempFilename();
Tools.FileSystem.WriteAllText(path, content);
// Database access
var dbConnection = Tools.Sqlite.Create(connectionString);
var dataAdapter = Tools.MSSql.CreateAdapter(connectionString);
// Network utilities
bool isOnline = Tools.Network.IsInternetAvailable();
string publicIP = Tools.Network.GetPublicIPAddress();
// Windows-specific (Windows platform only)
bool serviceRunning = Tools.WinTool.IsServiceRunning("MyService");
Tools.WinTool.StartService("MyService");
// Web utilities (ASP.NET Core context)
string sanitized = Tools.Web.Html.SanitizeHtml(userHtml);
var actionResult = Tools.Web.AspNetCore.CreateResponse(data);
The Tools namespace embodies several key principles:
- Discoverability: type Tools. to explore all available operations in IntelliSense
- Extensibility: platform-specific assemblies contribute their own tool classes (Tools.WinTool, Tools.iOSTool)

New projects in the framework define their own tool classes:
// In Sphere10.Framework.Windows
namespace Tools;
public static class WinTool {
public static bool IsServiceRunning(string serviceName) { /* ... */ }
public static void StartService(string serviceName) { /* ... */ }
}
// In Sphere10.Framework.Data.MSSQL
namespace Tools;
public static class MSSqlTool {
public static IDataAccessCommand CreateAdapter(string connectionString) { /* ... */ }
}
// In Sphere10.Framework.Web.AspNetCore
namespace Tools.Web;
public static class AspNetCoreTool {
public static IActionResult CreateResponse<T>(T data) { /* ... */ }
}
When a new domain adds its own tool class, it automatically becomes discoverable alongside all other tools.
Data Structures & Collections
Persistence & Serialization
Transactions & Scoping
Cryptography & Security
Streams & I/O
Utilities & Extensions
Type System & Reflection
Data & Resources
Data Access & Persistence
Networking & Communications
Cryptography & Consensus
Desktop & Platform Integration
Application Framework
Testing & Quality
Cross-Platform & Generators
Core Library Tests
Related Project Tests
Composability: The library is structured around small, focused abstractions that compose predictably. Decorators, adapters, and interfaces allow developers to layer functionality without tight coupling.
Explicit Control: Sphere10 Framework favors explicitness over magic. Memory allocation strategies, serialization formats, caching policies, and locking semantics are configurable rather than hidden behind opaque defaults.
Performance-Conscious: Many components are optimized for batch operations, memory locality, and reduced allocations. The library provides both in-memory and stream-backed variants of collections to accommodate different performance/capacity tradeoffs.
Extensibility: Core abstractions like IItemSerializer<T>, IExtendedList<T>, and ITransactionalScope are designed to be implemented or decorated by user code. The library provides building blocks rather than closed systems.
Correctness: Transaction-aware data structures emphasize ACID semantics where applicable. Merkle tree implementations prioritize cryptographic correctness. Thread-safety guarantees are explicit and documented.
Sphere10 Framework provides an extensive suite of collection types that extend beyond the standard .NET collections:
- IExtendedList<T> and IExtendedCollection<T> support range-based operations (batch reads, writes, insertions, deletions) for improved performance when working with large datasets.
- StreamMappedList<T>, StreamMappedDictionary<TKey, TValue>, and StreamMappedHashSet<T> persist their data to streams, enabling collections that exceed available memory while maintaining list/dictionary/set semantics.
- IRecyclableList<T> and its implementations maintain a pool of reusable indices for deleted items, optimizing scenarios with frequent insertions and deletions.
- SynchronizedExtendedList<T>, SynchronizedDictionary<TKey, TValue>, and ProducerConsumerQueue<T> for concurrent scenarios.

The ClusteredStreams subsystem provides a sophisticated mechanism for managing multiple logical streams within a single underlying stream. This enables:
- IClusteredStreamsAttachment components allow behaviors like indexing, merkle-tree maintenance, and key storage to be composed declaratively.

This architecture underpins the library's stream-mapped collections and object spaces, providing a flexible foundation for custom persistence schemes.
Object spaces abstract the storage and retrieval of typed objects across multiple "dimensions" (logical tables). Key capabilities include:
Object spaces are suitable for lightweight embedded databases, event stores, and other scenarios requiring structured persistence without a full database engine.
Sphere10 Framework includes multiple merkle-tree implementations optimized for different use cases:
These implementations integrate with collections, enabling IMerkleList<T>, IMerkleDictionary<TKey, TValue>, and IMerkleSet<T> variants that maintain cryptographic integrity proofs alongside their data.
The library provides cryptographic primitives and utilities:
Sphere10 Framework's serialization framework is designed for efficiency, control, and extensibility:
- The IItemSerializer<T> abstraction enables custom serialization logic for any type. Serializers can be composed, decorated, and registered in a SerializerFactory.
- Polymorphic serialization is supported via [KnownSubType] attributes.

The framework integrates deeply with the library's collections and storage primitives, ensuring that persistence strategies are explicit and customizable.
The transactional subsystem provides ACID guarantees for in-memory and file-backed data structures:
- ITransactionalScope defines a protocol for commit/rollback operations. Context-aware scopes allow nested transactions and isolation.
- TransactionalList<T>, TransactionalDictionary<TKey, TValue>, and TransactionalHashSet<T> provide ACID semantics over persistent storage.
- TransactionalStream wraps a stream with commit/rollback capabilities, enabling atomic multi-operation updates.
- FileTransaction and FileTransactionScope coordinate file-system operations within a transactional boundary.

These primitives enable building robust, crash-recoverable data stores without relying on external database engines.
The caching subsystem offers flexible, policy-driven caching mechanisms:
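As an illustrative sketch only: the ActionCache type and its constructor parameters below are assumptions chosen to illustrate a policy-driven cache, not confirmed Sphere10 API.

```csharp
using Sphere10.Framework;
using System;

// Hypothetical sketch: ActionCache and the expirationDuration parameter
// are illustrative assumptions, not confirmed API.
var cache = new ActionCache<int, string>(
    id => $"value-{id}",                          // invoked on cache miss
    expirationDuration: TimeSpan.FromMinutes(5)   // assumed expiration policy
);

var first = cache[42];   // miss: value is fetched and stored
var second = cache[42];  // hit: value is served from the cache
```

The intent is that fetch logic, capacity, and expiration are explicit constructor-level policies rather than hidden defaults, consistent with the library's stated design principles.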
The protocol subsystem facilitates structured, bidirectional communication between peers:
- ProtocolOrchestrator manages message dispatch, handshake workflows, and request/response correlation.

This framework is suitable for building custom RPC mechanisms, control protocols, or peer-to-peer communication layers.
Sphere10 Framework extends .NET's stream abstractions with specialized implementations:
- An IBuffer can serve as the backing store instead of a contiguous byte array, enabling arbitrarily large in-memory streams with paging support.

A flexible, composable logging framework:
- ILogger defines a simple, level-based logging interface.

Utilities for managing concurrency and synchronization:
- ProducerConsumerLock, NonReentrantLock, FastLock, and minimal semaphore implementations.
- Critical<T> and CriticalObject encapsulate objects with lock-based access.
- ProducerConsumerQueue<T> provides bounded/unbounded thread-safe queuing with async support.

A job scheduling framework with support for various triggers:
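A hedged sketch of bounded producer/consumer queuing with ProducerConsumerQueue<T>; the member names Put and TryTake below are illustrative assumptions, not confirmed API.

```csharp
using Sphere10.Framework;
using System;
using System.Threading.Tasks;

// Hypothetical sketch: Put/TryTake and the bounded-capacity constructor
// are assumptions for illustration, not confirmed API.
var queue = new ProducerConsumerQueue<int>(100); // assumed bounded capacity

var producer = Task.Run(() => {
    for (int i = 0; i < 1000; i++)
        queue.Put(i);                    // blocks while the queue is at capacity
});

var consumer = Task.Run(() => {
    for (int i = 0; i < 1000; i++) {
        if (queue.TryTake(out var item)) // assumed non-blocking take
            Console.WriteLine(item);
    }
});

Task.WaitAll(producer, consumer);
```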
Comprehensive string manipulation and validation helpers:
Efficient, space-optimizing encoding schemes:
Utilities for controlling and optimizing memory usage:
- IBuffer represents a sequence of bytes that can be memory-resident or memory-mapped.

Map and transform objects between representations:
Flexible conversion utilities for type coercion:
Low-level math helpers and calculations:
Utilities for working with value types and immutable structures:
Build custom comparison and equality implementations:
- IComparer<T> instances with fluent composition (field-by-field, descending, custom).
- IEqualityComparer<T> for custom equality logic.

Reflection utilities and type analysis:
Helpers for type checks and resolution:
Custom attributes for annotating types and members:
Core abstractions for building extensible frameworks:
Fluent extensions for .NET types:
- Batch, Chunk, Distinct, GroupBy variants with custom comparers.
- TryAdd, AddOrUpdate, GetOrAdd with custom logic.
- Map, FlatMap, Filter for more expressive LINQ alternatives.

Abstractions for querying data from various sources:
Utilities for hardware and peripheral interaction:
Low-level network helpers:
Query system and runtime information:
Catch-all category for specialized helpers:
Helpers for object manipulation and introspection:
Generic filtering framework:
- IFilter<T> implementations for filtering collections.

Support for functional programming patterns:
Simplify implementing proper disposal:
- DisposableBase and DisposableObject handle disposal protocol.

Framework for event routing and aggregation:
Utilities for robust error handling:
Framework for loading and caching resources:
Already covered above in detail.
Framework for objects that maintain persistent state:
Simplified persistence for straightforward scenarios:
Calculate sizes and offsets:
Efficient working with Span<T> and Memory<T>:
- Rent Memory<T> from pools.
- Work with Span, Memory, and arrays safely.

Extensions and helpers for TextWriter:
50+ Extension Methods covering:
- StringExtensions: Truncation, case handling, validation, parsing, formatting
- EnumerableExtensions: Filtering, grouping, transformation, batching
- TaskExtensions: Async utilities, timeout handling, retry logic
- StreamExtensions: I/O operations, reading/writing helpers
- TypeExtensions: Reflection helpers, type resolution

IExtendedList<T> extends the standard IList<T> interface with range-based operations: ReadRange, UpdateRange, InsertRange, and RemoveRange. These methods accept long indices and counts, supporting collections larger than 2GB. Implementations are expected to optimize batch operations rather than iterating element-by-element.
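The range operations can be sketched as follows; this assumes ExtendedList<T> is the default in-memory IExtendedList<T> implementation (as used elsewhere in this document), and exact overloads may differ.

```csharp
using Sphere10.Framework;

// Illustrative sketch of range-based operations; exact signatures
// in the actual API may differ.
var list = new ExtendedList<int>();
list.AddRange(new[] { 1, 2, 3, 4, 5, 6, 7, 8 });

// Batch read: items at indices 2..4 in one call rather than element-by-element
var middle = list.ReadRange(2, 3);

// Batch overwrite of a contiguous run starting at index 0
list.UpdateRange(0, new[] { 10, 20 });

// Batch insert and removal
list.InsertRange(4, new[] { 99, 98 });
list.RemoveRange(4, 2);
```

Because indices and counts are long-typed, the same calls remain valid for collections whose logical size exceeds int.MaxValue.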
Stream-mapped collections persist their data to streams using serializers and cluster-based storage. They behave like standard collections but their contents reside on disk (or any stream) rather than entirely in memory. This enables collections to scale beyond available RAM while maintaining familiar APIs.
ObjectStream<T> is a low-level primitive for storing a sequence of serialized objects in a stream, along with metadata and indexes. It underpins stream-mapped collections and object spaces, providing features like:
SerializationContext tracks object references and cycles during serialization/deserialization. When a reference-type object is serialized, the context checks if it has been seen before. If so, a reference marker is emitted rather than re-serializing the object. This enables correct handling of cyclic graphs and repeated references.
ITransactionalScope defines a protocol for ACID transactions:
- BeginTransaction(): Start a new transaction.
- CommitTransaction(): Persist changes.
- RollbackTransaction(): Discard changes.

Context-aware scopes (subclasses of ContextScope) track active transactions within the call context, enabling nested transactions and isolation semantics. Transactional collections and streams implement ITransactionalObject to participate in these scopes.
A MerkleCoordinate identifies a node within a merkle tree by its level and index. Merkle proofs are represented as sequences of MerkleNode instances, which can be verified against a root hash to confirm the presence and position of specific leaves. The library's merkle implementations expose methods for generating proofs and verifying them efficiently.
Many components follow the decorator pattern, allowing behavior to be layered:
- StreamDecorator: Wrap a stream to add logging, profiling, or transaction support.
- ListDecorator<T>: Augment list behavior without reimplementing the entire interface.
- ItemSerializerDecorator<T>: Transform serialization logic (e.g., add null-handling or reference-tracking).

Adapters convert between related interfaces (e.g., IList<T> to IExtendedList<T>) to integrate external code with Sphere10 Framework's abstractions.
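As a minimal sketch of decorator layering on the serializer side: AsReferenceSerializer() appears in the reference-tracked serialization example later in this document; the Node and NodeSerializer types are borrowed from that example, and the drop-in substitution shown is the general pattern rather than confirmed API detail.

```csharp
using Sphere10.Framework;

// Sketch: layer reference-tracking onto a base serializer via a decorator.
// Node/NodeSerializer are from the reference-tracked example in this document.
IItemSerializer<Node> baseSerializer = new NodeSerializer();
IItemSerializer<Node> decorated = baseSerializer.AsReferenceSerializer();

// The decorated serializer is a drop-in replacement wherever an
// IItemSerializer<Node> is expected (factories, collections, streams).
var factory = new SerializerFactory();
factory.Register(typeof(Node), decorated);
```

The same shape applies to StreamDecorator and ListDecorator<T>: the decorator implements the decorated interface, so callers are unaware that behavior has been layered on.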
Sphere10 Framework's architecture is organized into largely independent subsystems that compose through well-defined interfaces:
Collections Layer: Extended list and collection interfaces define the foundation. Implementations range from simple in-memory structures to complex stream-mapped and paged variants.
Storage Layer: Clustered streams provide the underlying mechanism for multi-stream persistence. Object streams layer serialization, indexing, and metadata on top of clustered streams.
Serialization Framework: A registry-based system (SerializerFactory) maps types to serializers. Serializers compose via decorators for features like null-handling, polymorphism, and reference-tracking.
Transactional Framework: Transactional scopes coordinate commit/rollback across multiple objects. Collections, streams, and object spaces implement ITransactionalObject to participate.
Merkle Trees: Separate implementations provide different tradeoffs. Merkle-aware collections integrate tree maintenance into their mutation operations.
Utilities and Extensions: Comparers, operators, logging, threading, and I/O utilities provide cross-cutting functionality without coupling to core abstractions.
Data typically flows from application code through collections or object spaces, which delegate to object streams for persistence. Object streams use clustered streams for storage and serializers for encoding. Transactional scopes coordinate mutations, and merkle trees provide integrity proofs where enabled.
NuGet Packages:
# Core library
dotnet add package Sphere10.Framework
# Platform-specific (optional)
dotnet add package Sphere10.Framework.Windows # Windows utilities
dotnet add package Sphere10.Framework.Windows.Forms # WinForms integration
dotnet add package Sphere10.Framework.Windows.LevelDB # Native LevelDB wrapper
dotnet add package Sphere10.Framework.CryptoEx # Extended cryptography (ECDSA, ECIES)
dotnet add package Sphere10.Framework.Communications # Networking & protocols
dotnet add package Sphere10.Framework.Web.AspNetCore # ASP.NET Core integration
Or reference compiled assemblies directly in your project.
using Sphere10.Framework;
using System.IO;
using System.Text; // for Encoding
// BinarySerializer: Efficient binary serialization
var serializer = new BinarySerializer<string>();
var stream = new MemoryStream();
var context = new SerializationContext();
serializer.Serialize(context, "Hello World", stream);
// StreamMappedList: Disk-backed collection (no memory limit)
using var fileStream = new FileStream("data.bin", FileMode.Create, FileAccess.ReadWrite);
var list = new StreamMappedList<string>(fileStream, new StringSerializer(Encoding.UTF8));
list.Add("Persisted Item 1");
list.Add("Persisted Item 2");
list.Save();
// FlatMerkleTree: Cryptographic proof of integrity
var tree = new FlatMerkleTree(CHF.SHA2_256);
tree.Leafs.AddRange(new[] { Encoding.UTF8.GetBytes("Block 1"), Encoding.UTF8.GetBytes("Block 2") });
var root = tree.Root; // Root hash proves all items
// Synchronized collections: Thread-safe variants
var syncList = new SynchronizedExtendedList<int>();
syncList.Add(42); // Automatically locked during mutation
BinarySerializer: Efficient Binary Serialization
using Sphere10.Framework;
using System.IO;
// Serialize primitive types
var binarySerializer = new BinarySerializer<int>();
var stream = new MemoryStream();
var context = new SerializationContext();
// Write an integer
binarySerializer.Serialize(context, 42, stream);
// Read it back
stream.Position = 0;
var value = binarySerializer.Deserialize(context, stream);
Console.WriteLine(value); // 42
// For custom objects, use ItemSerializer
class Product {
public int Id { get; set; }
public string Name { get; set; }
}
var productStream = new MemoryStream();
var prodContext = new SerializationContext();
var product = new Product { Id = 1, Name = "Widget" }; // sample object for a custom serializer
// BinarySerializer produces compact binary output
// Pair it with constant-size serializers when indexed access is needed
var constSizeSerializer = new BinarySerializer<int>();
var offsets = new List<long>();
for (int i = 0; i < 1000; i++) {
offsets.Add(productStream.Position);
constSizeSerializer.Serialize(prodContext, i, productStream);
}
// Now you can seek directly to any index without scanning
productStream.Seek(offsets[500], SeekOrigin.Begin);
StreamMappedList: Disk-Backed Collections
using Sphere10.Framework;
using System.IO;
using System.Text; // for Encoding
// Create a collection that persists to disk
using var fileStream = new FileStream("inventory.dat", FileMode.Create, FileAccess.ReadWrite);
// StreamMappedList supports massive collections (limited only by disk space)
var inventory = new StreamMappedList<Product>(
fileStream,
new CustomProductSerializer(), // Your serializer
autoLoad: false
);
// Add items (written to disk immediately)
inventory.Add(new Product { Id = 1, Name = "Widget", Price = 9.99m });
inventory.Add(new Product { Id = 2, Name = "Gadget", Price = 19.99m });
inventory.Add(new Product { Id = 3, Name = "Doohickey", Price = 14.99m });
// Efficient batch operations
inventory.AddRange(new[] {
new Product { Id = 4, Name = "Thingamajig", Price = 24.99m },
new Product { Id = 5, Name = "Whatsit", Price = 12.99m }
});
// Save index to disk
inventory.Save();
// Later, reload from disk (only index is loaded into memory)
using var reloadStream = new FileStream("inventory.dat", FileMode.Open, FileAccess.Read);
var reloaded = new StreamMappedList<Product>(reloadStream, new CustomProductSerializer(), autoLoad: true);
// Access items (loaded from disk as needed)
var firstItem = reloaded[0]; // Reads from disk
var batch = reloaded.ReadRange(1, 3); // Batch read is more efficient
// StreamMappedList with checksums for integrity
using var checkedStream = new FileStream("checked.dat", FileMode.Create, FileAccess.ReadWrite);
var checkedList = new StreamMappedList<string>(
checkedStream,
new StringSerializer(Encoding.UTF8),
itemChecksummer: new ObjectHashCodeChecksummer<string>(),
reservedStreams: 1,
policy: ClusteredStreamsPolicy.Default
);
checkedList.Add("Important data");
checkedList.Save();
// Checksums verify data wasn't corrupted on disk
StreamPagedList: Memory-Paged Disk Collections
using Sphere10.Framework;
using System.IO;
using System.Text; // for Encoding
// StreamPagedList loads pages into memory as needed (more efficient for sequential access)
using var pagedStream = new FileStream("pages.dat", FileMode.Create, FileAccess.ReadWrite);
var pagedList = new StreamPagedList<string>(
new StringSerializer(Encoding.UTF8),
pagedStream,
pageSize: 4096 // 4KB pages, tuned for your access patterns
);
// Add thousands of items
for (int i = 0; i < 100_000; i++) {
pagedList.Add($"Item {i}");
}
// Sequential access is fast (page already in memory)
for (int i = 0; i < 10; i++) {
Console.WriteLine(pagedList[i]);
}
// Random access loads the needed page
var item50000 = pagedList[50000];
// For constant-size items, use constant-size serializer for direct indexing
var constSizeList = new StreamPagedList<string>(
new StringSerializer(Encoding.UTF8).AsConstantSize(50), // Fixed 50-byte strings
pagedStream,
pageSize: 4096
);
// Now can directly calculate position: position = itemIndex * itemSize
// Without scanning through variable-length items
FlatMerkleTree: Cryptographic Integrity Proofs
using Sphere10.Framework;
using System.Security.Cryptography;
using System.Text; // for Encoding
// Create a flat merkle tree (all nodes in memory, optimal for proof generation)
var tree = new FlatMerkleTree(CHF.SHA2_256);
// Add data (hashed immediately)
var data = new[] {
Encoding.UTF8.GetBytes("Transaction 1"),
Encoding.UTF8.GetBytes("Transaction 2"),
Encoding.UTF8.GetBytes("Transaction 3"),
Encoding.UTF8.GetBytes("Transaction 4")
};
tree.Leafs.AddRange(data);
// Get root hash (proof that all items are included)
var rootHash = tree.Root;
Console.WriteLine($"Root: {Convert.ToHexString(rootHash)}");
// Generate proof for a specific item (prove item 2 is in tree)
var proof = tree.GenerateProof(2); // Generates merkle path
var leaf = tree.Leafs[2];
// Verify the proof (can be done independently)
bool verified = MerkleTreeUtilities.VerifyProof(
leaf,
proof,
rootHash,
CHF.SHA2_256
);
// FlatMerkleTree is ideal for:
// - Blockchain blocks (fixed number of transactions)
// - Smaller merkle trees where full tree fits in memory
// - Frequent proof generation
// Compare with LongMerkleTree for massive datasets
var longTree = new LongMerkleTree(CHF.SHA2_256);
// LongMerkleTree only keeps sub-root hashes in memory
// Can handle millions of items with minimal memory overhead
// But proof generation requires computing intermediate hashes
LongMerkleTree: Memory-Efficient Merkle Trees
using Sphere10.Framework;
using System.Text; // for Encoding
// LongMerkleTree: For massive datasets (millions of items)
// Only stores sub-root hashes, not all nodes
var tree = new LongMerkleTree(CHF.SHA2_256);
// Append items efficiently
for (int i = 0; i < 1_000_000; i++) {
var data = Encoding.UTF8.GetBytes($"Item {i}");
tree.Leafs.Add(data); // add one leaf per item
}
// Root hash proves integrity of all million items
var root = tree.Root;
// Generate proof for an item
var proof = tree.GenerateProof(500_000);
// Verify proof (works same as FlatMerkleTree)
var leaf = tree.Leafs[500_000];
bool verified = MerkleTreeUtilities.VerifyProof(
leaf,
proof,
root,
CHF.SHA2_256
);
// LongMerkleTree advantages:
// - O(1) memory for append operations
// - Can handle unlimited items
// - Perfect for blockchain, event logs, append-only stores
// Size information
var size = tree.Size;
Console.WriteLine($"Leaf count: {size.LeafCount}");
Console.WriteLine($"Tree depth: {size.Depth}");
Synchronized Collections: Thread-Safe Wrappers
using Sphere10.Framework;
using System.Collections.Generic;
using System.Linq; // for Enumerable.Range
using System.Threading.Tasks;
// SynchronizedExtendedList: Thread-safe variant of ExtendedList
var syncList = new SynchronizedExtendedList<int>();
// Safe for concurrent access from multiple threads
var tasks = new List<Task>();
for (int t = 0; t < 10; t++) {
tasks.Add(Task.Run(() => {
for (int i = 0; i < 1000; i++) {
syncList.Add(i); // Automatically locked
}
}));
}
Task.WaitAll(tasks.ToArray());
Console.WriteLine($"Total items: {syncList.Count}"); // 10,000 safely
// SynchronizedDictionary: Thread-safe key-value pairs
var syncDict = new SynchronizedDictionary<string, Account>();
var producer = Task.Run(() => {
for (int i = 0; i < 100; i++) {
syncDict[$"account_{i}"] = new Account { Id = i, Balance = 100m };
}
});
var consumer = Task.Run(() => {
System.Threading.Thread.Sleep(50); // Let producer add some
foreach (var key in syncDict.Keys) {
var account = syncDict[key];
Console.WriteLine($"{key}: {account.Balance}");
}
});
Task.WaitAll(producer, consumer);
// SynchronizedRepository: Cached, thread-safe data access
var syncRepo = new SynchronizedRepository<int, Product>(
loadFunc: id => FetchProductFromDatabase(id)
);
// Thread-safe get (with automatic caching)
var product1 = syncRepo.Get(1);
var product2 = syncRepo.Get(2);
// Multiple threads can safely access the cache
var readTasks = Enumerable.Range(0, 100)
.Select(i => Task.Run(() => syncRepo.Get(i % 10)))
.ToArray();
Task.WaitAll(readTasks);
// Synchronized collection types available:
// - SynchronizedExtendedList<T>
// - SynchronizedDictionary<TKey, TValue>
// - SynchronizedSet<T>
// - SynchronizedQueue<T>
// - SynchronizedHeap<T>
// All use internal locking for thread safety
class Account {
public int Id { get; set; }
public decimal Balance { get; set; }
}
class Product {
public int Id { get; set; }
public string Name { get; set; }
public decimal Price { get; set; }
}
Product FetchProductFromDatabase(int id) {
return new Product { Id = id, Name = $"Product {id}", Price = 9.99m };
}
class CustomProductSerializer : IItemSerializer<Product> {
public void Serialize(ISerializationContext context, Product item, Stream stream) {
// Custom serialization logic
}
public Product Deserialize(ISerializationContext context, Stream stream) {
// Custom deserialization logic
return new Product();
}
}
Advanced StreamMappedList Usage:
The StreamMappedList shown above demonstrates the core disk-backed storage pattern. For additional persistence options:
itemChecksummer to verify data integrity (detects corruption)reservedStreams to attach additional metadata streamsClusteredStreamsPolicy to control cluster allocationTransactional Scopes:
// Transactional boundaries for ACID operations
var dict = new TransactionalDictionary<string, Account>();
using (var scope = dict.BeginScope()) {
using (var txn = scope.BeginTransaction()) {
dict["acc1"] = new Account { Balance = 1000 };
dict["acc2"] = new Account { Balance = 500 };
// Auto-rollback if exception occurs
txn.Commit(); // Explicit commit for atomicity
}
}
// Transactional dictionary backed by a file (file-path constructor arguments omitted here for brevity)
var persistedDict = new TransactionalDictionary<string, Account>();
using (var scope = persistedDict.BeginScope()) {
using (var txn = scope.BeginTransaction()) {
persistedDict["acc001"] = new Account { Balance = 1000 };
persistedDict["acc002"] = new Account { Balance = 500 };
txn.Commit(); // Atomic commit to disk
}
// If exception occurs, automatic rollback
}
// Verify persistence
using (var scope2 = persistedDict.BeginScope()) {
using (var txn2 = scope2.BeginTransaction()) {
var acc1 = persistedDict["acc001"]; // Data persisted
Console.WriteLine(acc1.Balance); // 1000
}
}
The serialization system supports reference-tracked graphs, polymorphic types, and custom decorators. See the Core Examples section above for BinarySerializer patterns. Refer to IItemSerializer<T> interface and SerializerFactory for custom implementations.
Merkle List with Integrity Proofs:
using Sphere10.Framework;
using System.Security.Cryptography;
// Create a merkle-aware list with SHA-256
var hasher = new HashAlgorithmAdapter(SHA256.Create());
var merkleList = new FlatMerkleList<string>(
ItemSerializer.Default<string>(),
hasher
);
merkleList.Add("Block 1");
merkleList.Add("Block 2");
merkleList.Add("Block 3");
// Get root hash (commitment to entire list)
byte[] rootHash = merkleList.MerkleTree.Root;
// Generate proof that "Block 2" is at index 1
var proof = merkleList.MerkleTree.GenerateProof(1);
// Verify proof independently
var isValid = merkleList.MerkleTree.VerifyProof(
merkleList.GetItemHash(1),
1,
proof,
rootHash
);
Console.WriteLine($"Proof valid: {isValid}"); // true
Merkle Dictionary (Multiple Keys):
var merkleDictionary = new MerkleListAdapter<KeyValuePair<string, int>>(
new ExtendedList<KeyValuePair<string, int>>(),
hasher
);
merkleDictionary.Add(new KeyValuePair<string, int>("Alice", 100));
merkleDictionary.Add(new KeyValuePair<string, int>("Bob", 50));
// Prove integrity of multi-item state
var multiProof = merkleDictionary.MerkleTree.GenerateMultiProof(new[] { 0, 1 });
Built-in Serializers (Default):
using Sphere10.Framework;
// Simple type serialization
var intSerializer = ItemSerializer<int>.Default;
byte[] bytes = intSerializer.Serialize(42);
int restored = intSerializer.Deserialize(bytes);
// Supports complex types automatically
var listSerializer = ItemSerializer<ExtendedList<string>>.Default;
var list = new ExtendedList<string> { "a", "b", "c" };
var serialized = listSerializer.Serialize(list);
var deserialized = listSerializer.Deserialize(serialized);
Custom Serializer Factory with Type Registration:
var factory = new SerializerFactory();
// Register primitives with specific strategies
factory.Register(
typeof(string),
new StringSerializer(SizeDescriptorStrategy.UseVarInt)
);
// Register custom type
factory.Register(
typeof(MyObject),
new MyObjectSerializer(factory)
);
// Retrieve and use
var serializer = factory.GetSerializer(typeof(MyObject));
var data = serializer.Serialize(myObj);
Polymorphic Serialization (Inheritance Support):
// Animal is abstract; Dog and Cat inherit from it
// Mark subtypes with [KnownSubType]
[KnownSubType(typeof(Dog))]
[KnownSubType(typeof(Cat))]
public abstract class Animal { /* ... */ }
// Default serializer automatically handles polymorphism
var animalSerializer = ItemSerializer<Animal>.Default;
var animals = new ExtendedList<Animal> {
new Dog("Fido"),
new Cat("Mittens")
};
byte[] bytes = animalSerializer.Serialize(animals);
var restored = animalSerializer.Deserialize(bytes);
Console.WriteLine(restored[0].GetType()); // Dog ✓
Console.WriteLine(restored[1].GetType()); // Cat ✓
Reference-Tracked Serialization (Graph Preservation):
// When serializing object graphs with repeated references
// or cycles, use reference serializers to preserve identity
class Node {
public string Value { get; set; }
public Node Next { get; set; }
}
var factory = new SerializerFactory();
var refSerializer = new NodeSerializer().AsReferenceSerializer();
factory.Register(typeof(Node), refSerializer);
// Circular linked list: A -> B -> A
var a = new Node { Value = "A" };
var b = new Node { Value = "B", Next = a };
a.Next = b;
byte[] data = refSerializer.Serialize(a);
var restored = refSerializer.Deserialize(data);
// Identity preserved: restored.Next.Next == restored ✓
Compact Integer Encoding (VarInt/CVarInt):
using Sphere10.Framework;
using System.IO; // for MemoryStream
// VarInt: Variable-length signed integers (more compact for small numbers)
using (var ms = new MemoryStream()) {
VarInt.Write(ms, 300);
ms.Position = 0;
int v = VarInt.Read(ms); // 300
// 300 is encoded in fewer bytes than a fixed 4-byte int
}
// CVarInt: Compact unsigned, extreme compression for typical ranges
var bytes = CVarInt.ToBytes(10000); // Few bytes only
var value = CVarInt.From(bytes);
// Typical usage in custom serializers
class CompactSerializer : ItemSerializerBase<int> {
public override void Serialize(ISerializationContext context, int item) {
CVarInt.Write(context.Writer, (ulong)item);
}
public override int Deserialize(IDeserializationContext context) {
return (int)CVarInt.Read(context.Reader);
}
}
Cryptographic Hashing:
using Sphere10.Framework;
var data = "Hello, World!";
// Standard hash functions
byte[] sha256 = Tools.Hashing.SHA256(data);
byte[] sha512 = Tools.Hashing.SHA512(data);
byte[] blake2b = Tools.Hashing.BLAKE2b(data);
byte[] murmurhash = Tools.Hashing.MurmurHash3(data);
// Hash files
byte[] fileHash = Tools.Hashing.SHA256File("path/to/file.bin");
// Compute multiple simultaneously
var hashes = Tools.Hashing.ComputeMultipleHashes(data, CHF.SHA2_256, CHF.SHA3_256);
String & Text Extensions:
using Sphere10.Framework;
var text = "Hello World";
// Formatting & validation
var padded = text.PadToLength(20); // Pad or truncate to exact length
var truncated = text.Truncate(5); // Truncate with ellipsis
bool isEmpty = text.IsNullOrEmpty();
bool isWhitespace = " ".IsNullOrWhiteSpace();
// Type checking
bool isNumeric = "12345".IsNumeric();
bool isAlpha = "abc".IsAlpha();
bool isAlphaNumeric = "abc123".IsAlphaNumeric();
bool isHex = "DEADBEEF".IsHex();
// Case conversion
var camelCase = "hello_world".ToCamelCase(); // helloWorld
var pascalCase = "hello_world".ToPascalCase(); // HelloWorld
var snakeCase = "HelloWorld".ToSnakeCase(); // hello_world
// Parsing & extraction
var (success, number) = "42".TryParseInt();
var guid = "550e8400-e29b-41d4-a716-446655440000".TryParseGuid();
var words = "The quick brown fox".SplitOnWhitespace();
// Splitting & joining
var lines = "line1\nline2\nline3".ToLines();
var csv = new[] { "a", "b", "c" }.JoinWith(", ");
LevelDB Integration (High-Performance Key-Value Store):
using Sphere10.Framework.Windows.LevelDB;
using System;
using System.Text;
using System.Text.Json;
// Open database
using var db = new DB("./mydata");
// Basic operations
var key = Encoding.UTF8.GetBytes("user:42");
var value = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(user));
db.Put(key, value);
var retrieved = db.Get(key);
if (retrieved != null) {
    var restored = JsonSerializer.Deserialize<User>(Encoding.UTF8.GetString(retrieved));
}
// Batch operations (atomic)
using (var batch = db.CreateBatch()) {
    for (int i = 0; i < 1000; i++) {
        batch.Put(Encoding.UTF8.GetBytes($"key:{i}"), Encoding.UTF8.GetBytes($"value:{i}"));
    }
    db.Write(batch);
}
// Iteration & range queries
using (var iterator = db.CreateIterator()) {
    iterator.SeekToFirst();
    while (iterator.IsValid()) {
        var k = Encoding.UTF8.GetString(iterator.Key());
        var v = Encoding.UTF8.GetString(iterator.Value());
        Console.WriteLine($"{k} = {v}");
        iterator.Next();
    }
}
The library's design encourages extending core abstractions rather than modifying built-in types. Here are the main extension points:
Implement IItemSerializer<T> (or inherit ItemSerializerBase<T>) to define custom serialization logic:
public class UserSerializer : ItemSerializerBase<User> {
    private readonly IItemSerializer<string> _stringSerializer;
    private readonly IItemSerializer<int> _intSerializer;
    public UserSerializer(SerializerFactory factory) {
        _stringSerializer = factory.GetSerializer<string>();
        _intSerializer = factory.GetSerializer<int>();
    }
    public override void Serialize(ISerializationContext context, User user) {
        _stringSerializer.Serialize(context, user.Name);
        _intSerializer.Serialize(context, user.Age);
    }
    public override User Deserialize(IDeserializationContext context) {
        var name = _stringSerializer.Deserialize(context);
        var age = _intSerializer.Deserialize(context);
        return new User { Name = name, Age = age };
    }
}
// Register and use
var factory = new SerializerFactory();
factory.Register(typeof(User), new UserSerializer(factory));
var serialized = factory.GetSerializer<User>().Serialize(user);
Wrap serializers to add cross-cutting concerns (null-handling, encryption, compression, etc.):
// Add null-substitution
var baseSerializer = ItemSerializer<int>.Default;
var nullableSerializer = new WithNullSubstitutionSerializer<int>(
    baseSerializer,
    defaultValue: -1 // Use -1 when null
);
// Chain decorators (each decorator wraps a serializer of the same item type)
var encrypted = new EncryptedSerializer<MyType>(ItemSerializer<MyType>.Default, encryptionKey);
var compressed = new CompressedSerializer<MyType>(encrypted);
Implement IProjectionIndex<TItem, TKey> to add custom indexing strategies:
public class LastNameIndex : ProjectionIndexBase<Person, string> {
    private readonly Dictionary<string, long> _indexStore = new();
    public LastNameIndex(ObjectStream<Person> objectStream)
        : base(objectStream) { }
    public override string ProjectKey(Person item) => item.LastName;
    protected override void OnIndexAdded(Person item, long index) {
        _indexStore[ProjectKey(item)] = index; // store index mapping
    }
    public override long? TryGetIndex(string lastName) {
        // Lookup by last name
        return _indexStore.TryGetValue(lastName, out var idx) ? idx : null;
    }
}
// Attach to ObjectStream
var objectStream = new ObjectStream<Person>(clusteredStreams, serializer);
var index = new LastNameIndex(objectStream);
objectStream.RegisterIndex(index);
// Query via index
var personIndex = index.TryGetIndex("Smith");
var person = objectStream[personIndex.Value];
Subclass TransactionalScopeBase to implement custom transaction semantics:
public class FileBackedTransactionalScope : TransactionalScopeBase {
    private readonly FileStream _logFile;
    private readonly List<Operation> _operations = new(); // Operation: user-defined record of a pending change
    public FileBackedTransactionalScope(string logPath) {
        // log file backing the transaction journal (illustrative path)
        _logFile = File.Open(logPath, FileMode.Append, FileAccess.Write);
    }
    protected override void OnBeginTransaction() {
        _operations.Clear();
        // Write transaction start marker to log
    }
    protected override void OnCommitTransaction() {
        // Flush all operations to file atomically
        _logFile.Write(Encoding.UTF8.GetBytes("[COMMIT]"));
        _logFile.Flush();
    }
    protected override void OnRollbackTransaction() {
        // Discard pending operations
        _operations.Clear();
    }
}
// Use in transactional collections
var dict = new TransactionalDictionary<string, int>();
using (var scope = new FileBackedTransactionalScope("./txn.log")) {
    using (var txn = scope.BeginTransaction()) {
        dict["key"] = 42;
        txn.Commit();
    }
}
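The commit/rollback semantics above boil down to buffering mutations and making them visible only on commit. A minimal in-memory sketch of those semantics (illustrative only; `TinyTxnDict` is a hypothetical type, unrelated to the library's internals):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of transactional semantics: buffer mutations,
// apply them only on Commit, discard them on Rollback.
class TinyTxnDict {
    private readonly Dictionary<string, int> _committed = new();
    private readonly Dictionary<string, int> _pending = new();
    public int this[string key] {
        get => _pending.TryGetValue(key, out var v) ? v : _committed[key];
        set => _pending[key] = value;            // buffered, not yet durable
    }
    public void Commit() {                       // make pending changes visible
        foreach (var kv in _pending) _committed[kv.Key] = kv.Value;
        _pending.Clear();
    }
    public void Rollback() => _pending.Clear();  // discard pending changes
    public bool Contains(string key) => _committed.ContainsKey(key);
}

var d = new TinyTxnDict();
d["key"] = 42;
d.Rollback();
Console.WriteLine(d.Contains("key"));  // False
d["key"] = 42;
d.Commit();
Console.WriteLine(d.Contains("key"));  // True
```

A file-backed scope adds durability on top of this: the pending set is journaled so a crash mid-commit can be detected and rolled forward or back on recovery.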
Decorate existing collections to add custom behavior:
// Add logging to list operations
public class LoggingList<T> : ExtendedListDecorator<T> {
    private readonly ILogger _logger;
    public LoggingList(IExtendedList<T> inner, ILogger logger) : base(inner) {
        _logger = logger;
    }
    public override void Add(T item) {
        _logger.Info($"Adding {item}");
        base.Add(item);
    }
    public override void InsertRange(long index, T[] items) {
        _logger.Info($"Inserting {items.Length} items at {index}");
        base.InsertRange(index, items);
    }
}
// Use transparently
var baseList = new ExtendedList<int>();
IExtendedList<int> logged = new LoggingList<int>(baseList, logger);
logged.Add(42); // "Adding 42" logged
- Use SynchronizedList<T>, SynchronizedDictionary<TKey, TValue>, SynchronizedExtendedList<T>, or wrap with ConcurrentStream where concurrent access is required.
- SynchronizedRepository<T> and SynchronizedLogger provide synchronized wrappers.
- Prefer ReadRange, UpdateRange, and InsertRange over element-by-element operations for large datasets.
- Dispose streams, scopes, and other disposable resources deterministically with using statements.

Sphere10 Framework is a mature library that has evolved over multiple years. Core subsystems (collections, serialization, transactions, Merkle trees) are stable and production-tested. Some components (post-quantum cryptography, protocol orchestration) may be less battle-tested and should be evaluated carefully for production use.
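The synchronized wrappers named above are decorators: every call to the inner collection is serialized through one lock. A minimal plain-C# sketch of that pattern (`SyncedList` here is a hypothetical stand-in, not the library's SynchronizedList<T> implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Minimal sketch of the synchronized-decorator idea: all access to the
// inner collection goes through a single lock (illustrative only).
class SyncedList<T> {
    private readonly List<T> _inner = new();
    private readonly object _lock = new();
    public void Add(T item) { lock (_lock) _inner.Add(item); }
    public int Count { get { lock (_lock) return _inner.Count; } }
}

var list = new SyncedList<int>();
Parallel.For(0, 1000, i => list.Add(i)); // concurrent writers, no lost updates
Console.WriteLine(list.Count); // 1000
```

Coarse-grained locking trades some throughput for simplicity and correctness; the range-operation advice above (ReadRange, InsertRange) also reduces lock churn, since one lock acquisition covers many elements.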
- Sphere10.Framework.Windows - Windows-specific utilities
- Sphere10.Framework.Windows.Forms - WinForms integration
- Sphere10.Framework.Windows.LevelDB - Native LevelDB wrapper
- Sphere10.Framework.Application - Cross-platform application framework
- Sphere10.Framework.Communications - Networking and protocol layers
- Sphere10.Framework.Web.AspNetCore - ASP.NET Core integration
- Sphere10.Framework.CryptoEx - Extended cryptography (ECDSA, ECIES, etc.)

Distributed under the MIT NON-AI License.
This license encourages ethical AI development and prevents use in certain AI/ML contexts without explicit permission. See the LICENSE file for full details.
More information: Sphere10 NON-AI-MIT License
Herman Schoenfeld - Software Engineer