Azure Table Storage made simple for .NET. Entity Framework-style data access with automatic batching, retry logic, and type-safe operations. 95% cheaper than other NoSQL solutions.
$ dotnet add package PartiTables
Azure Table Storage can be up to 95% cheaper than other NoSQL solutions and is blazing fast, but it is painful to use directly. PartiTables fixes that by providing clean, Entity Framework-style data access patterns.
Save Money - $10/month instead of $240/month
Type-Safe - IntelliSense, compile-time checking
Auto-Retry - Built-in resilience with Polly
Batch Operations - Save multiple entities atomically
Less Code - One class replaces hundreds of lines
dotnet add package PartiTables
Without PartiTables:
// The old way: manual TableEntity manipulation
var entity = new TableEntity("partition", "row-key");
entity["FirstName"] = "John";
entity["LastName"] = "Doe";
await tableClient.UpsertEntityAsync(entity);
// ... tedious manual parsing when reading
// ... complex batch operations
// ... manual retry logic
With PartiTables:
// Work with your entities the way you would in Entity Framework
var patient = new Patient { PatientId = "patient-123" };
patient.Meta.Add(new PatientMeta { FirstName = "John", Email = "john@example.com" });
patient.Consents.Add(new Consent { Type = "DataSharing" });
await repo.SaveAsync(patient);
// ✅ All related records saved in ONE batch operation
// ✅ Automatic retry with Polly resilience
// ✅ RowKeys auto-generated from patterns
// ✅ Strong typing, IntelliSense, compile-time safety
[TablePartition("Customers", "{CustomerId}")]
public class Customer
{
public string CustomerId { get; set; } = default!;
[RowKeyPrefix("")]
public List<Order> Orders { get; set; } = new();
[RowKeyPrefix("")]
public List<Address> Addresses { get; set; } = new();
}
[RowKeyPattern("{CustomerId}-order-{OrderId}")]
public class Order : RowEntity
{
public string OrderId { get; set; } = Guid.NewGuid().ToString("N")[..8];
public decimal Amount { get; set; }
public string Status { get; set; } = "Pending";
}
[RowKeyPattern("{CustomerId}-address-{AddressId}")]
public class Address : RowEntity
{
public string AddressId { get; set; } = Guid.NewGuid().ToString("N")[..8];
public string City { get; set; } = default!;
}
CRUD operations:
// Create
var customer = new Customer { CustomerId = "cust-123" };
customer.Orders.Add(new Order { Amount = 99.99m, Status = "Pending" });
await repo.SaveAsync(customer);
// Read
var customer = await repo.FindAsync("cust-123");
var orders = await repo.QueryCollectionAsync("cust-123", c => c.Orders);
// Update
var loaded = await repo.FindAsync("cust-123");
loaded.Orders[0].Status = "Shipped";
await repo.SaveAsync(loaded);
// Delete
await repo.DeleteAsync("cust-123");
var customer = new Customer { CustomerId = "cust-123" };
customer.Orders.Add(new Order { Amount = 99.99m });
customer.Orders.Add(new Order { Amount = 45.50m });
customer.Addresses.Add(new Address { City = "Seattle" });
await repo.SaveAsync(customer);
// ✅ All entities saved in ONE atomic batch operation
// ✅ Either all succeed or all fail (within partition)
// ✅ Up to 100 operations per batch
// ✅ Automatic grouping by partition key
// Automatic retry on transient failures
await repo.SaveAsync(customer);
// ✅ Retries on network errors
// ✅ Exponential backoff
// ✅ Circuit breaker protection
You don't need to:
- Write retry loops by hand
- Wire up Polly policies yourself
- Work out which failures are transient and safe to retry
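For contrast, this is roughly the boilerplate a hand-rolled approach requires, sketched with the Polly and Azure.Data.Tables packages used directly (the retry count, backoff values, and table name here are illustrative, not PartiTables defaults):

```csharp
using Azure;
using Azure.Data.Tables;
using Polly;

var connectionString = "UseDevelopmentStorage=true"; // placeholder
var tableClient = new TableClient(connectionString, "Customers");

// Transient-fault policy you would otherwise maintain yourself:
// retry throttling (429) and server errors (5xx) with exponential backoff.
var retryPolicy = Policy
    .Handle<RequestFailedException>(ex => ex.Status == 429 || ex.Status >= 500)
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var entity = new TableEntity("partition", "row-key") { ["FirstName"] = "John" };
await retryPolicy.ExecuteAsync(() => tableClient.UpsertEntityAsync(entity));
```

With PartiTables, an equivalent policy is applied for you on every `SaveAsync` call.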
PartiTables automatically handles datasets larger than Azure's 100-item batch limit with automatic rollback if any batch fails:
// Save 10,000 records across 100 batches
var salesData = GenerateSalesData("store-001", 10_000);
await repo.SaveAsync(salesData);
// ✅ Automatically split into 100-item batches
// ✅ If ANY batch fails, ALL previous batches are rolled back
// ✅ Your data stays consistent - it's all-or-nothing!
What happens:
- Records are split into 100-item batches (Azure's per-transaction limit)
- If any batch fails, the previously committed batches are rolled back
Querying:
// Get all data for a partition
var allCustomerData = await repo.FindAsync("customer-123");
// Get specific collection (more efficient)
var orders = await repo.QueryCollectionAsync("customer-123", c => c.Orders);
// Prefix-based queries
var orders2024 = await client.QueryByPrefixAsync("customer-123", "order-2024-");
// Load and filter in memory (for small datasets)
var customer = await repo.FindAsync("customer-123");
var pendingOrders = customer.Orders.Where(o => o.Status == "Pending").ToList();
// For larger datasets, query specific collection first
var allOrders = await repo.QueryCollectionAsync("customer-123", c => c.Orders);
var shipped = allOrders.Where(o => o.Status == "Shipped").ToList();
PartiTables transforms your object graph into optimized Table Storage entities:
// Your code
var patient = new Patient { PatientId = "patient-123" };
patient.Meta.Add(new PatientMeta { FirstName = "John", LastName = "Doe" });
patient.Consents.Add(new Consent { Type = "DataSharing" });
patient.Devices.Add(new DeviceLink { DeviceId = "device-001" });
await repo.SaveAsync(patient);
What happens behind the scenes:
✅ Batch Transaction to Table Storage
PartitionKey: clinic-001
patient-123-meta (Auto-generated from pattern)
patient-123-consent-a7b8 (Auto-generated from pattern)
patient-123-device-001 (Auto-generated from pattern)
✅ All saved atomically in one batch
✅ Automatic retry on failure
✅ Optimistic concurrency handled
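To picture what is actually submitted, here is a hand-rolled equivalent of that batch written against the raw Azure.Data.Tables SDK (a sketch; the table name and entity property names are illustrative):

```csharp
using Azure.Data.Tables;

var connectionString = "UseDevelopmentStorage=true"; // placeholder
var tableClient = new TableClient(connectionString, "Patients");

// One action per row, all sharing the same partition key
var actions = new List<TableTransactionAction>
{
    new(TableTransactionActionType.UpsertMerge,
        new TableEntity("clinic-001", "patient-123-meta")
        {
            ["FirstName"] = "John",
            ["LastName"] = "Doe"
        }),
    new(TableTransactionActionType.UpsertMerge,
        new TableEntity("clinic-001", "patient-123-consent-a7b8")
        {
            ["Type"] = "DataSharing"
        }),
    new(TableTransactionActionType.UpsertMerge,
        new TableEntity("clinic-001", "patient-123-device-001")
        {
            ["DeviceId"] = "device-001"
        })
};

// The whole group succeeds or fails together (single-partition transaction)
await tableClient.SubmitTransactionAsync(actions);
```

PartiTables builds these actions, row keys, and property maps for you from the attributed model classes.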
Single Partition (Fast Queries):
PartitionKey: customer-123
RowKeys:
customer-123-order-001
customer-123-order-002
customer-123-address-001
Multi-Tenant Isolation:
PartitionKey: tenant-789
RowKeys:
tenant-789-user-001
tenant-789-user-002
tenant-789-setting-001
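The multi-tenant layout above maps onto the same attribute model shown earlier. A sketch, assuming the attributes work as in the Customer example (the `TenantUser`/`TenantSetting` names and properties are illustrative):

```csharp
[TablePartition("Tenants", "{TenantId}")]
public class Tenant
{
    public string TenantId { get; set; } = default!;

    [RowKeyPrefix("")]
    public List<TenantUser> Users { get; set; } = new();

    [RowKeyPrefix("")]
    public List<TenantSetting> Settings { get; set; } = new();
}

[RowKeyPattern("{TenantId}-user-{UserId}")]
public class TenantUser : RowEntity
{
    public string UserId { get; set; } = Guid.NewGuid().ToString("N")[..8];
    public string Email { get; set; } = default!;
}

[RowKeyPattern("{TenantId}-setting-{Key}")]
public class TenantSetting : RowEntity
{
    public string Key { get; set; } = default!;
    public string Value { get; set; } = default!;
}
```

Every query is then scoped to a single tenant's partition, which is both the fastest access path in Table Storage and a hard isolation boundary between tenants.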
📱 Multi-tenant SaaS | 🛒 Customer orders & history | 🏥 Healthcare records
📋 Audit logs | 👤 User profiles | 🌐 IoT device data
📊 Time-series metrics | 🔔 Notification queues | 📦 Inventory tracking
🎫 Event ticketing | 💬 Chat history | 🔐 Session management
cd PartiSample
dotnet run
Choose from 6 interactive demos showing real-world scenarios.
MIT License — © 2025 PartiTech
Contributions are welcome! Please feel free to submit a Pull Request.