Rystem.PlayFramework is an orchestrated AI execution framework for .NET, built around multi-agent concepts and LLM providers such as OpenAI. It offers multi-modal support, client-side tools, and advanced planning, and is a production-ready foundation for building AI-powered applications.

Install the packages:
dotnet add package Rystem.PlayFramework
dotnet add package Rystem.PlayFramework.Providers.OpenAI # Or other providers
public interface IWeatherService
{
Task<string> GetWeatherAsync(string city);
}
public class WeatherService : IWeatherService
{
public async Task<string> GetWeatherAsync(string city)
{
// Call weather API
return $"The weather in {city} is sunny, 24°C";
}
}
var builder = WebApplication.CreateBuilder(args);
// Register chat client (IChatClient implementation from Microsoft.Extensions.AI)
builder.Services.AddChatClient<OpenAIChatClient>("gpt-4o");
builder.Services.AddPlayFramework("default", pb => pb
// Reference the registered chat client by name
.WithChatClient("gpt-4o")
// Add operational boundaries (prevents hallucinations)
.UseDefaultGuardrails()
// Add scene with service tool
.AddScene("weather", "Get weather information", scene => scene
.WithService<IWeatherService>(s => s
.WithMethod(x => x.GetWeatherAsync(default!), "getWeather", "Get weather for a city")))
// Add cache for conversation state
.AddCache(cache => cache
.WithMemory()
.WithExpiration(TimeSpan.FromMinutes(30)))
// Add custom authorization layer (optional)
.AddAuthorizationLayer<CustomAuthorizationLayer>());
// Register services
builder.Services.AddSingleton<IWeatherService, WeatherService>();
var app = builder.Build();
// Map HTTP endpoints
app.MapPlayFramework(settings =>
{
settings.BasePath = "/api/ai";
settings.RequireAuthentication = false;
});
app.Run();
curl -X POST http://localhost:5158/api/ai/default \
-H "Content-Type: application/json" \
-d '{
"message": "What is the weather in Milan?"
}'
Response (SSE stream):
{"status":"executingScene","sceneName":"weather","message":"Executing weather scene"}
{"status":"completed","message":"The weather in Milan is sunny, 24°C","totalCost":0.0023}
public interface ICalculatorService
{
double Add(double augend, double addend);
double Subtract(double minuend, double subtrahend);
double Multiply(double multiplicand, double multiplier);
double Divide(double dividend, double divisor);
}
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.AddScene("calculator", "Perform arithmetic operations", scene => scene
.WithService<ICalculatorService>(s => s
.WithMethod(x => x.Add(default, default), "add", "Add two numbers")
.WithMethod(x => x.Subtract(default, default), "subtract", "Subtract two numbers")
.WithMethod(x => x.Multiply(default, default), "multiply", "Multiply two numbers")
.WithMethod(x => x.Divide(default, default), "divide", "Divide two numbers"))));
Scenes use actors to provide system prompts and context:
.AddScene("assistant", "General purpose assistant", scene => scene
.WithActors(actors => actors
.AddActor("You are a helpful AI assistant. Be concise and accurate."))
.WithService<IAssistantService>(s => s
.WithMethod(x => x.Search(default!), "search", "Search for information")))
Execute tools on browser/mobile (camera, geolocation, file picker):
.AddScene("vision", "Analyze user photos", scene => scene
.OnClient(client => client
.AddTool("capturePhoto", "Take photo from camera")
.AddTool("getCurrentLocation", "Get GPS coordinates")
.AddTool("selectFiles", "Open file picker"))
.WithService<IVisionService>(s => s
.WithMethod(x => x.AnalyzeImage(default!), "analyzeImage", "Analyze an image")))
Client-side implementation (TypeScript):
registry.register("capturePhoto", async () => {
const content = await AIContentConverter.fromCamera();
return [content];
});
See Client Interaction Guide for details.
Control how scenes are selected and executed:
var settings = new SceneRequestSettings
{
ExecutionMode = SceneExecutionMode.Planning, // Direct | Planning | DynamicChaining | Scene
MaxRecursionDepth = 5,
EnableSummarization = true
};
// Use via ISceneManager (programmatic)
await foreach (var response in sceneManager.ExecuteAsync(
"Book a flight to Paris and reserve a hotel",
settings: settings))
{
Console.WriteLine(response.Message);
}
| Mode | Description | Use Case |
|---|---|---|
| Direct | Single scene, no planning | Simple queries, fast responses |
| Planning | Upfront multi-step plan | Known workflows (booking, checkout) |
| DynamicChaining | LLM decides next step live | Exploratory tasks (research, debugging) |
| Scene | Execute specific scene by name | Resuming after client interaction |
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.WithPlanning(planning =>
{
planning.MaxRecursionDepth = 5;
planning.Enabled = true;
})
.AddScene(...));
Dynamic chaining is set per-request via SceneRequestSettings.ExecutionMode:
var settings = new SceneRequestSettings
{
ExecutionMode = SceneExecutionMode.DynamicChaining,
MaxDynamicScenes = 10
};
// Via HTTP API (PlayFrameworkRequest)
var request = new PlayFrameworkRequest
{
Message = "Describe this image",
Contents = new List<ContentItem>
{
new()
{
Type = "image",
Data = Convert.ToBase64String(imageBytes),
MediaType = "image/jpeg",
Name = "photo.jpg"
}
}
};
// Via ISceneManager (programmatic)
var input = MultiModalInput.FromImageBytes("Describe this image", imageBytes, "image/jpeg");
await foreach (var response in sceneManager.ExecuteAsync(input))
{
Console.WriteLine(response.Message);
}
new ContentItem
{
Type = "audio",
Data = Convert.ToBase64String(audioBytes),
MediaType = "audio/mp3",
Name = "recording.mp3"
}
new ContentItem
{
Type = "uri",
Uri = "https://example.com/document.pdf",
MediaType = "application/pdf"
}
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
// Planning
.WithPlanning(planning =>
{
planning.Enabled = true;
planning.MaxRecursionDepth = 5;
})
// Summarization
.WithSummarization(summarization =>
{
summarization.Enabled = true;
summarization.CharacterThreshold = 15_000; // Summarize after 15K characters
summarization.ResponseCountThreshold = 20; // Or after 20 responses
})
// Director (multi-scene orchestration)
.WithDirector(director =>
{
director.Enabled = true;
director.MaxReExecutions = 3;
})
// Cache (for conversation state)
.AddCache(cache => cache
.WithMemory() // In-memory cache
.WithExpiration(TimeSpan.FromMinutes(30)))
// Rate Limiting
.WithRateLimit(rateLimit => rateLimit
.TokenBucket(capacity: 10000, refillRate: 1000)
.GroupBy("userId")
.WaitOnExceeded(TimeSpan.FromSeconds(30)))
// Cost Tracking
.WithCostTracking("USD", inputCostPer1K: 0.01m, outputCostPer1K: 0.03m)
// Guardrails (operational boundaries)
.UseDefaultGuardrails()); // Prevents hallucinations and out-of-scope responses
// OR: .UseCustomGuardrails("You must only answer questions about...");
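Based on the comments above, the two summarization thresholds combine with OR semantics: summarization triggers when either limit is crossed. As a conceptual sketch (illustrative only, not PlayFramework internals):

```csharp
using System;

public static class SummarizationPolicy
{
    // Conceptual model only: mirrors CharacterThreshold / ResponseCountThreshold
    // with OR semantics; not the library's actual implementation.
    public static bool ShouldSummarize(
        int conversationChars,
        int responseCount,
        int characterThreshold = 15_000,
        int responseCountThreshold = 20)
        => conversationChars >= characterThreshold
           || responseCount >= responseCountThreshold;
}
```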
var settings = new SceneRequestSettings
{
// Execution mode
ExecutionMode = SceneExecutionMode.Planning,
// Planning
MaxRecursionDepth = 5,
MaxDynamicScenes = 10,
// Features
EnableSummarization = true,
EnableDirector = false,
EnableStreaming = true,
// Model overrides
ModelId = "gpt-4o",
Temperature = 0.7f,
MaxTokens = 4096,
// Caching
CacheBehavior = CacheBehavior.Default,
ConversationKey = "user-123-session-1",
// Budget
MaxBudget = 0.50m, // $0.50 max cost
// Scene selection
SceneName = "SpecificScene" // For SceneExecutionMode.Scene
};
// Via HTTP API (PlayFrameworkRequest)
var request = new PlayFrameworkRequest
{
Message = "Your query",
Settings = settings,
Metadata = new Dictionary<string, object>
{
{ "userId", "user-123" },
{ "sessionId", "session-abc" }
}
};
// Via ISceneManager (programmatic)
await foreach (var response in sceneManager.ExecuteAsync(
"Your query",
metadata: new Dictionary<string, object> { { "userId", "user-123" } },
settings: settings))
{
Console.WriteLine(response.Message);
}
Use conversationKey for multi-turn conversations:
var conversationKey = Guid.NewGuid().ToString();
var settings = new SceneRequestSettings { ConversationKey = conversationKey };
// First request
await foreach (var response in sceneManager.ExecuteAsync(
"What's the weather in Paris?", settings: settings))
{
Console.WriteLine(response.Message);
}
// Follow-up request (uses cached context)
await foreach (var response in sceneManager.ExecuteAsync(
"And in London?", settings: settings))
{
Console.WriteLine(response.Message);
}
// LLM remembers Paris context!
In-Memory (single-server):
.AddCache(cache => cache
.WithMemory()
.WithExpiration(TimeSpan.FromMinutes(30)))
Distributed (IDistributedCache — Redis, SQL Server, etc.):
// First register IDistributedCache in DI
builder.Services.AddStackExchangeRedisCache(options =>
options.Configuration = "localhost:6379");
// Then use distributed cache
.AddCache(cache => cache
.WithDistributed()
.WithExpiration(TimeSpan.FromMinutes(60)))
Custom cache implementation:
.AddCache(cache => cache
.WithCustomCache<MyCustomCache>()
.WithExpiration(TimeSpan.FromMinutes(30)))
PlayFramework supports persistent storage of conversations using the Rystem Repository Pattern:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.AddScene(...)
// Enable caching (required for Repository to work)
.AddCache(cache => cache
.WithMemory()
.WithExpiration(TimeSpan.FromMinutes(30)))
// Enable repository persistence
.UseRepository());
public sealed class StoredConversation
{
public required string ConversationKey { get; init; } // Unique ID (primary key)
public string? UserId { get; init; } // Owner (from IAuthorizationLayer)
public bool IsPublic { get; set; } // Public vs Private
public DateTime Timestamp { get; init; } // Created/Updated
public required List<StoredMessage> Messages { get; init; } // Conversation history
public ExecutionState? ExecutionState { get; init; } // Planning/Director state
}
Conversations are automatically saved with the UserId returned by IAuthorizationLayer.AuthorizeAsync() (or settings.UserId) and with IsPublic defaulting to false. Private conversations require a userId match when loading from the repository:
// In SceneManager
if (!storedConversation.IsPublic && storedConversation.UserId != currentUserId)
{
return AiSceneResponse.Unauthorized("Access denied to private conversation");
}
Enable conversation management endpoints:
app.MapPlayFramework("default", settings =>
{
settings.BasePath = "/api/ai";
settings.EnableConversationEndpoints = true; // Enable CRUD endpoints
settings.MaxConversationsPageSize = 100; // Max results per query
});
Available endpoints:
GET /api/ai/default/conversations
Query parameters:
- searchText - Filter by message content
- includePublic - Include public conversations (default: true)
- includePrivate - Include private conversations (default: true)
- orderBy - Sort order: TimestampDescending | TimestampAscending (default: TimestampDescending)
- skip - Pagination offset (default: 0)
- take - Page size (default: 50, max: MaxConversationsPageSize)

Example:
curl "http://localhost:5158/api/ai/default/conversations?searchText=weather&orderBy=TimestampDescending&take=20"
Response:
[
{
"conversationKey": "abc-123",
"userId": "user@example.com",
"isPublic": false,
"timestamp": "2025-01-15T10:30:00Z",
"messages": [...],
"executionState": {...}
}
]
GET /api/ai/default/conversations/{conversationKey}
Returns single conversation. Authorization check: private conversations require userId match.
Response:
- 200 OK - Conversation found and authorized
- 403 Forbidden - Private conversation, unauthorized
- 404 Not Found - Conversation not found

DELETE /api/ai/default/conversations/{conversationKey}
Owner-only operation. Requires userId match.
Response:
- 204 No Content - Successfully deleted
- 403 Forbidden - Not the owner
- 404 Not Found - Conversation not found

PATCH /api/ai/default/conversations/{conversationKey}/visibility
Request:
{
"isPublic": true
}
Toggles conversation between public/private. Owner-only operation.
Response:
- 200 OK - Returns updated conversation
- 403 Forbidden - Not the owner
- 404 Not Found - Conversation not found

Use Rystem Repository to build custom queries:
var repository = repositoryFactory.Create("default");
// Find conversations for specific user
var userConversations = await repository
.Where(x => x.UserId == "user@example.com")
.OrderByDescending(x => x.Timestamp)
.Take(50)
.ToListAsEntityAsync();
// Search by message content
var searchResults = await repository
.Where(x => x.Messages.Any(m => m.Text.Contains("weather")))
.ToListAsEntityAsync();
// Public conversations only
var publicConversations = await repository
.Where(x => x.IsPublic)
.OrderByDescending(x => x.Timestamp)
.ToListAsEntityAsync();
PlayFramework uses .UseRepository() (parameterless) to enable persistence. The underlying repository storage is configured separately using Rystem Repository Framework's standard IServiceCollection extensions:
// 1. Enable repository in PlayFramework
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.AddScene(...)
.AddCache(cache => cache.WithMemory())
.UseRepository());
// 2. Configure storage backend via Rystem Repository Framework
// Entity Framework Core:
builder.Services.AddRepository<StoredConversation, string>(repo =>
repo.WithEntityFramework<AppDbContext>());
// Or Cosmos DB:
builder.Services.AddRepository<StoredConversation, string>(repo =>
repo.WithCosmosDb("connection-string", "database", "container"));
// Or In-Memory (testing):
builder.Services.AddRepository<StoredConversation, string>(repo =>
repo.WithInMemory());
Use Factory Pattern for tenant-specific repositories:
// Register multiple factories
builder.Services.AddPlayFramework("tenant-a", pb => pb
.WithChatClient("gpt-4o")
.UseRepository());
builder.Services.AddPlayFramework("tenant-b", pb => pb
.WithChatClient("gpt-4o")
.UseRepository());
// Map separate endpoints
app.MapPlayFramework("tenant-a", settings =>
{
settings.BasePath = "/api/ai";
settings.EnableConversationEndpoints = true;
});
app.MapPlayFramework("tenant-b", settings =>
{
settings.BasePath = "/api/ai";
settings.EnableConversationEndpoints = true;
});
Endpoints:
- /api/ai/tenant-a - Chat for Tenant A
- /api/ai/tenant-a/conversations - Conversations for Tenant A
- /api/ai/tenant-b - Chat for Tenant B
- /api/ai/tenant-b/conversations - Conversations for Tenant B

PlayFramework supports base64-encoded media content in messages. To optimize performance and reduce payload size, the includeContents parameter controls whether media is included when fetching conversations.
StoredMessage model supports multi-modal content:
public class StoredMessage
{
public string Role { get; set; } // user | assistant | system | tool
public string? Text { get; set; } // Text content
public List<AIContent>? Contents { get; set; } // Multi-modal attachments
}
public class AIContent
{
public string Type { get; set; } // "text" | "data"
public string? Text { get; set; } // For type="text"
public string? Data { get; set; } // Base64 encoded (for type="data")
public string? MediaType { get; set; } // MIME type: image/jpeg, audio/mp3, application/pdf
}
By default, Contents are excluded from list operations to reduce bandwidth and improve performance. Use includeContents=true when you need to display media.
List conversations WITHOUT media (faster, smaller payload):
GET /api/ai/default/conversations?includeContents=false&take=50
Get single conversation WITH media (for display):
GET /api/ai/default/conversations/abc-123?includeContents=true
Response without contents:
{
"conversationKey": "abc-123",
"userId": "user@example.com",
"isPublic": false,
"timestamp": "2025-01-15T10:30:00Z",
"messages": [
{
"role": "user",
"text": "Analyze this image",
"contents": null // ← Excluded to reduce payload
}
]
}
Response with contents:
{
"conversationKey": "abc-123",
"messages": [
{
"role": "user",
"text": "Analyze this image",
"contents": [
{
"type": "data",
"data": "iVBORw0KGgoAAAANSUhEUgAA...", // ← Base64 image
"mediaType": "image/jpeg"
}
]
},
{
"role": "assistant",
"text": "The image shows a sunset over mountains.",
"contents": null
}
]
}
PlayFramework uses Repository Framework metadata to control content inclusion:
var result = await queryBuilder
.Skip(parameters.Skip)
.Take(pageSize)
.AddMetadata(nameof(parameters.IncludeContents), parameters.IncludeContents.ToString())
.ToListAsEntityAsync();
Backend automatically excludes Contents when includeContents=false:
if (!includeContents)
{
foreach (var message in conversation.Messages)
{
message.Contents = null; // Reduce JSON payload size
}
}
| Operation | includeContents | Reason |
|---|---|---|
| List conversations | false (default) | Faster, smaller payload - only need titles/previews |
| Load single conversation for display | true | Need to render images/PDFs/audio in UI |
| Search conversations | false | Only searching text content |
| Export conversation | true | Full conversation with attachments |
Without contents (includeContents=false), list responses stay small and fast; with contents (includeContents=true), the full base64 media payloads are returned.
Best Practice: Always use includeContents=false for list operations, true only when loading a single conversation for display.
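On the client side, these query strings can be assembled with a small helper. ConversationUrls is a hypothetical name introduced here for illustration; the URL shapes match the endpoints shown above:

```csharp
using System;

public static class ConversationUrls
{
    // Hypothetical client-side helper; builds URLs matching the conversation endpoints.
    // List operations default to includeContents=false (small payloads).
    public static string List(string basePath, bool includeContents = false, int take = 50)
        => $"{basePath}/conversations?includeContents={(includeContents ? "true" : "false")}&take={take}";

    // Single-conversation loads default to includeContents=true (render media).
    public static string Get(string basePath, string conversationKey, bool includeContents = true)
        => $"{basePath}/conversations/{Uri.EscapeDataString(conversationKey)}?includeContents={(includeContents ? "true" : "false")}";
}
```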
For TypeScript/React clients, see the PlayFramework TypeScript Client README for helpers such as ContentUrlConverter (Base64 → Blob URL conversion).

The authorization layer supplies the userId used for conversation ownership. Example flow:
public class CustomAuthorizationLayer : IAuthorizationLayer
{
public async Task<AuthorizationResult> AuthorizeAsync(
SceneContext context,
SceneRequestSettings settings,
CancellationToken cancellationToken)
{
// Extract userId from JWT claims (set by HTTP middleware)
if (!context.Metadata.TryGetValue("userId", out var userId))
{
return new AuthorizationResult
{
IsAuthorized = false,
Reason = "User ID not found"
};
}
// Return userId for conversation ownership
return new AuthorizationResult
{
IsAuthorized = true,
UserId = userId.ToString()
};
}
}
Middleware to extract userId:
app.Use(async (context, next) =>
{
    if (context.User.Identity?.IsAuthenticated == true)
    {
        // Stash the authenticated user id; PlayFramework surfaces it
        // in the authorization layer as context.Metadata["userId"]
        context.Items["userId"] = context.User.Identity.Name;
    }
    await next();
});
See Repository Pattern Documentation for more storage options.
Use multiple LLM providers for reliability.
Step 1: Register chat clients in DI:
// Register IChatClient implementations
builder.Services.AddChatClient<OpenAIPrimaryChatClient>("openai-primary");
builder.Services.AddChatClient<OpenAISecondaryChatClient>("openai-secondary");
builder.Services.AddChatClient<AzureBackupChatClient>("azure-backup");
builder.Services.AddChatClient<ClaudeFallbackChatClient>("claude-fallback");
Step 2: Configure in PlayFramework builder:
builder.Services.AddPlayFramework("default", pb => pb
// Primary pool (load-balanced)
.WithChatClient("openai-primary")
.WithChatClient("openai-secondary")
.WithChatClient("azure-backup")
.WithLoadBalancingMode(LoadBalancingMode.RoundRobin) // None | Sequential | RoundRobin | Random
// Fallback chain (if primary pool fails)
.WithChatClientAsFallback("claude-fallback")
.WithFallbackMode(FallbackMode.Sequential) // Sequential | RoundRobin | Random
// Retry policy
.WithRetryPolicy(maxAttempts: 3, baseDelaySeconds: 1.0)
.AddScene(...));
Token Bucket strategy (recommended):
.WithRateLimit(rateLimit => rateLimit
.TokenBucket(capacity: 10000, refillRate: 1000) // 10K capacity, refill 1K/interval
.GroupBy("userId") // Per-user rate limiting
.WaitOnExceeded(TimeSpan.FromSeconds(30))) // Wait up to 30s when rate exceeded
Sliding Window:
.WithRateLimit(rateLimit => rateLimit
.SlidingWindow(maxRequests: 60, interval: TimeSpan.FromMinutes(1))
.RejectOnExceeded()) // Reject immediately
Fixed Window:
.WithRateLimit(rateLimit => rateLimit
.FixedWindow(maxRequests: 100, interval: TimeSpan.FromHours(1)))
Concurrent:
.WithRateLimit(rateLimit => rateLimit
.Concurrent(maxConcurrent: 5)) // Max 5 concurrent requests
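For intuition, the token bucket strategy can be modeled as a bucket holding up to `capacity` tokens that refills at a fixed rate; a request proceeds only if enough tokens remain. This is a simplified sketch of the general algorithm, not PlayFramework's implementation:

```csharp
using System;

// Simplified token-bucket model for illustration only.
public sealed class TokenBucket
{
    private readonly double _capacity;
    private readonly double _refillPerSecond;
    private double _tokens;
    private DateTime _lastRefill;

    public TokenBucket(double capacity, double refillPerSecond, DateTime? now = null)
    {
        _capacity = capacity;
        _refillPerSecond = refillPerSecond;
        _tokens = capacity; // start full
        _lastRefill = now ?? DateTime.UtcNow;
    }

    public bool TryConsume(double tokens, DateTime? now = null)
    {
        var t = now ?? DateTime.UtcNow;
        // Refill proportionally to elapsed time, capped at capacity
        _tokens = Math.Min(_capacity, _tokens + (t - _lastRefill).TotalSeconds * _refillPerSecond);
        _lastRefill = t;
        if (_tokens < tokens) return false; // caller may wait or reject
        _tokens -= tokens;
        return true;
    }
}
```

Roughly speaking, WaitOnExceeded corresponds to retrying TryConsume until the refill makes room, while RejectOnExceeded corresponds to failing immediately on a false result.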
builder.Services.AddLogging(logging =>
{
logging.AddConsole();
logging.AddApplicationInsights();
logging.SetMinimumLevel(LogLevel.Information);
});
Log Output:
[13:45:23 INF] 🎯 Trying LoadBalanced client: openai-primary (Attempt 1/3)
[13:45:24 INF] ✅ LoadBalanced client openai-primary succeeded (Tokens: 150→89, Cost: $0.0023)
[13:45:24 INF] ⚙️ Executing tool: GetWeatherAsync (City: Milan)
[13:45:24 INF] ✅ Scene 'weather' completed (Cost: $0.0023, Tokens: 239)
builder.Services.AddOpenTelemetry()
.WithMetrics(metrics => metrics
.AddMeter("Rystem.PlayFramework"))
.WithTracing(tracing => tracing
.AddSource("Rystem.PlayFramework"));
Metrics:
- playframework.request.duration
- playframework.request.cost
- playframework.request.tokens
- playframework.scene.executions

PlayFramework supports two levels of authorization:
- HTTP endpoint authorization (ASP.NET Core policies)
- IAuthorizationLayer (user permissions, quotas, feature flags)

Apply standard ASP.NET Core authorization at the HTTP endpoint level:
builder.Services.AddAuthorization(options =>
{
options.AddPolicy("Authenticated", policy =>
policy.RequireAuthenticatedUser());
options.AddPolicy("PlayFrameworkAccess", policy =>
policy.RequireClaim("feature", "ai"));
options.AddPolicy("PremiumUser", policy =>
policy.RequireClaim("subscription", "premium"));
});
app.MapPlayFramework(settings =>
{
settings.BasePath = "/api/ai";
settings.RequireAuthentication = true;
settings.AuthorizationPolicies = new List<string>
{
"Authenticated",
"PlayFrameworkAccess"
};
});
When to use: Token validation, JWT claims, role-based access, rate limiting policies.
Implement custom authorization logic that runs after initialization but before scene execution:
public class CustomAuthorizationLayer : IAuthorizationLayer
{
private readonly IUserService _userService;
private readonly ILogger<CustomAuthorizationLayer> _logger;
public CustomAuthorizationLayer(
IUserService userService,
ILogger<CustomAuthorizationLayer> logger)
{
_userService = userService;
_logger = logger;
}
public async Task<AuthorizationResult> AuthorizeAsync(
SceneContext context,
SceneRequestSettings settings,
CancellationToken cancellationToken)
{
// Extract userId from metadata
if (!context.Metadata.TryGetValue("userId", out var userIdObj) || userIdObj is not string userId)
{
return new AuthorizationResult
{
IsAuthorized = false,
Reason = "User ID not found in request metadata"
};
}
// Check user quota
var user = await _userService.GetUserAsync(userId, cancellationToken);
if (user.MonthlyQuota <= 0)
{
_logger.LogWarning("User {UserId} exceeded monthly quota", userId);
return new AuthorizationResult
{
IsAuthorized = false,
Reason = $"Monthly quota exceeded. Resets on {user.QuotaResetDate:yyyy-MM-dd}"
};
}
// Check feature flag for specific scene
if (settings.SceneName == "PremiumScene" && !user.HasFeature("premium-scenes"))
{
return new AuthorizationResult
{
IsAuthorized = false,
Reason = "Premium subscription required for this feature"
};
}
// Check budget limits
if (settings.MaxBudget.HasValue && settings.MaxBudget.Value > user.MaxBudgetPerRequest)
{
return new AuthorizationResult
{
IsAuthorized = false,
Reason = $"Requested budget ${settings.MaxBudget.Value} exceeds user limit ${user.MaxBudgetPerRequest}"
};
}
// All checks passed
_logger.LogInformation("User {UserId} authorized (Quota: {Quota}, Features: {Features})",
userId, user.MonthlyQuota, string.Join(", ", user.Features));
return new AuthorizationResult
{
IsAuthorized = true
};
}
}
Register in PlayFramework:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.AddScene(...)
.AddAuthorizationLayer<CustomAuthorizationLayer>());
// Register dependencies
builder.Services.AddSingleton<IUserService, UserService>();
Response when authorization fails:
{
"status": "error",
"errorMessage": "Authorization failed: Monthly quota exceeded. Resets on 2025-03-01",
"message": "You are not authorized to perform this action."
}
When to use: per-user quotas, feature flags, budget limits, and other business-logic checks that depend on application data.
Execution flow:
// 1. HTTP Endpoint Authorization (ASP.NET Core)
builder.Services.AddAuthorization(options =>
{
options.AddPolicy("Authenticated", policy =>
policy.RequireAuthenticatedUser());
});
app.MapPlayFramework(settings =>
{
settings.BasePath = "/api/ai";
settings.RequireAuthentication = true;
settings.AuthorizationPolicies = new List<string> { "Authenticated" };
});
// 2. Business Logic Authorization (IAuthorizationLayer)
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.AddAuthorizationLayer<CustomAuthorizationLayer>());
Both layers must pass for execution to proceed.
See AUTHORIZATION_EXAMPLE.md for comprehensive examples.
Guardrails prevent the AI from hallucinating or responding to requests outside the system's capabilities. When enabled, a system prompt is automatically added at the beginning of every new conversation to define what the AI can and cannot do.
Without guardrails, LLMs may invent tools, hallucinate capabilities, or answer requests far outside the system's scope. With guardrails, the AI stays within the registered scenes, actors, and tools, and redirects unsupported requests toward what it can actually do.
The default prompt (~150 tokens) instructs the AI to operate strictly within available capabilities:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.UseDefaultGuardrails() // Adds default operational boundaries
.AddScene("weather", "Get weather information", scene => scene
.WithService<IWeatherService>(s => s
.WithMethod(x => x.GetWeatherAsync(default!), "getWeather", "Get weather for a city")))
.AddScene("calculator", "Perform calculations", scene => scene
.WithService<ICalculatorService>(s => s
.WithMethod(x => x.Add(default, default), "add", "Add two numbers"))));
Default prompt:
You are a PlayFramework AI orchestrator. You can ONLY respond using:
- Available Scenes (specialized handlers for specific tasks)
- Available Actors (context providers and data enrichers)
- Available Tools (functions you can call)
RULES:
1. Use ONLY the scenes, actors, and tools explicitly registered in this system
2. If a request is outside available capabilities, explain what you CAN do instead
3. When selecting a scene, match user intent to scene purpose
4. When calling tools, use exact function signatures provided
5. Stay within the context provided by main actors and system context
6. Do NOT invent capabilities, hallucinate tools, or reference external systems
If unsure, ask for clarification within your available capabilities.
Define your own boundaries for domain-specific applications:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.UseCustomGuardrails(@"
You are an AI assistant for the XYZ Corporation customer support system.
CAPABILITIES:
- Check order status (GetOrderStatus tool)
- Process returns (InitiateReturn tool)
- Answer product questions using product database
RESTRICTIONS:
- Do NOT discuss pricing, discounts, or promotions (direct to sales team)
- Do NOT process refunds over $500 (escalate to manager)
- Do NOT share customer data from other accounts
- Stay professional and empathetic
If a request is outside these capabilities, politely explain what you CAN help with.
")
.AddScene("orders", "Manage customer orders", scene => scene
.WithService<IOrderService>(s => s
.WithMethod(x => x.GetOrderStatus(default!), "getOrderStatus", "Check order status")
.WithMethod(x => x.InitiateReturn(default!), "initiateReturn", "Start a return"))));
| Scenario | Recommendation |
|---|---|
| General-purpose chatbot | ✅ Use default guardrails |
| Domain-specific app (e.g., HR, finance, healthcare) | ✅ Use custom guardrails with strict policies |
| Open exploration (research assistant, creative writing) | ⚠️ Consider disabling (but monitor for misuse) |
| Production systems | ✅ Always enable (default or custom) |
Note: Guardrails are only added for new conversations (not when resuming from cache).
Without Guardrails:
User: "Can you book a flight to Paris?"
AI: "Sure! I'll use the FlightBookingAPI to search for flights..." ❌ (invented tool)
With Default Guardrails:
User: "Can you book a flight to Paris?"
AI: "I don't have a flight booking capability. I can help with: weather information and calculations. Would you like to check the weather in Paris instead?" ✅
With Custom Guardrails (customer support):
User: "Can you give me a 50% discount?"
AI: "I'm not able to discuss pricing or discounts. Please contact our sales team at sales@xyz.com for promotional offers." ✅
Guardrails pair well with IAuthorizationLayer for user-specific restrictions.

POST /api/ai/{factoryName}
Request:
{
"message": "What's the weather in Milan?",
"settings": {
"executionMode": "Direct",
"maxBudget": 0.10
}
}
Response (text/event-stream):
data: {"status":"executingScene","sceneName":"weather","message":"Calling weather API"}
data: {"status":"streaming","streamingChunk":"The weather"}
data: {"status":"streaming","streamingChunk":" in Milan"}
data: {"status":"completed","message":"The weather in Milan is sunny, 24°C"}
POST /api/ai/{factoryName}/streaming
Returns individual text chunks as they're generated.
Set enableStreaming: false in settings to get full response at once.
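When consuming these endpoints from .NET, each event arrives as a text line prefixed with `data: ` followed by a JSON payload. A minimal line filter might look like the sketch below; a production client should use a full SSE parser that also handles `event:`, `id:`, and multi-line `data` fields:

```csharp
using System;
using System.Collections.Generic;

public static class SseParser
{
    // Extracts the JSON payloads from raw SSE lines ("data: {...}"),
    // skipping blank keep-alive lines and non-data fields.
    public static List<string> ExtractDataPayloads(IEnumerable<string> lines)
    {
        var payloads = new List<string>();
        foreach (var line in lines)
        {
            if (line.StartsWith("data:", StringComparison.Ordinal))
                payloads.Add(line.Substring("data:".Length).TrimStart());
        }
        return payloads;
    }
}
```

Feed it the lines read via StreamReader.ReadLineAsync over the HTTP response stream.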
[Fact]
public async Task Scene_Should_Execute_Successfully()
{
// Arrange
var services = new ServiceCollection();
services.AddChatClient<MockChatClient>("mock");
services.AddPlayFramework("test", pb => pb
.WithChatClient("mock")
.AddScene("weather", "Get weather", scene => scene
.WithService<IWeatherService>(s => s
.WithMethod(x => x.GetWeatherAsync(default!), "getWeather", "Get weather for a city"))));
services.AddSingleton<IWeatherService, WeatherService>();
var provider = services.BuildServiceProvider();
var factory = provider.GetRequiredService<IPlayFramework>();
var sceneManager = factory.Create("test")!;
// Act
AiSceneResponse? lastResponse = null;
await foreach (var response in sceneManager.ExecuteAsync("Weather in Milan?"))
{
lastResponse = response;
}
// Assert
Assert.NotNull(lastResponse);
Assert.Equal(AiResponseStatus.Completed, lastResponse.Status);
Assert.Contains("sunny", lastResponse.Message, StringComparison.OrdinalIgnoreCase);
}
public interface IProductService
{
Task<List<Product>> SearchProductsAsync(string query);
Task<Product> GetProductDetailsAsync(int productId);
}
public interface ICartService
{
Task AddToCartAsync(int productId, int quantity);
Task<Cart> GetCartAsync(string userId);
}
builder.Services.AddChatClient<OpenAIChatClient>("gpt-4o");
builder.Services.AddPlayFramework("shop", pb => pb
.WithChatClient("gpt-4o")
// Search scene
.AddScene("product-search", "Search for products", scene => scene
.WithActors(actors => actors
.AddActor("You are a helpful shopping assistant. Help users find products."))
.WithService<IProductService>(s => s
.WithMethod(x => x.SearchProductsAsync(default!), "searchProducts", "Search for products")
.WithMethod(x => x.GetProductDetailsAsync(default), "getProductDetails", "Get product details")))
// Cart scene
.AddScene("cart-management", "Manage shopping cart", scene => scene
.WithService<ICartService>(s => s
.WithMethod(x => x.AddToCartAsync(default, default), "addToCart", "Add product to cart")
.WithMethod(x => x.GetCartAsync(default!), "getCart", "Get user cart")))
// Confirmation scene (client-side)
.AddScene("checkout", "Complete purchase", scene => scene
.OnClient(client => client
.AddTool("getUserConfirmation", "Ask user to confirm purchase"))
.WithService<ICheckoutService>(s => s
.WithMethod(x => x.ProcessPaymentAsync(default!), "processPayment", "Process payment")))
// Enable planning for multi-step workflows
.WithPlanning(planning =>
{
planning.Enabled = true;
planning.MaxRecursionDepth = 5;
})
// Cache for conversation state
.AddCache(cache => cache
.WithMemory()
.WithExpiration(TimeSpan.FromMinutes(30)))
// Rate limiting
.WithRateLimit(rateLimit => rateLimit
.TokenBucket(capacity: 5000, refillRate: 500)
.GroupBy("userId")
.WaitOnExceeded(TimeSpan.FromSeconds(10))));
app.MapPlayFramework("shop", settings =>
{
settings.BasePath = "/api/shop";
settings.RequireAuthentication = true;
});
Usage:
curl -X POST http://localhost:5158/api/shop/shop \
-H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
-d '{
"message": "Find me a red t-shirt, size M, add to cart and checkout",
"settings": {
"executionMode": "Planning",
"conversationKey": "user-123-session-1"
}
}'
Flow:
1. SearchProductsAsync("red t-shirt")
2. AddToCartAsync(productId, quantity)
3. ProcessPaymentAsync

Persist conversation memory across sessions with automatic summarization:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.WithMemory(memory => memory
.WithDefaultMemoryStorage("userId", "tenantId") // Storage keys
.WithMaxSummaryLength(2000) // Max chars for summary
.WithSystemPrompt("Summarize the key points of this conversation")
.WithIncludePreviousMemory(true)) // Include prior memory in context
.AddScene(...));
Custom storage:
.WithMemory(memory => memory
.WithCustomStorage<RedisMemoryStorage>()
.WithCustomMemory<CustomMemorySummarizer>())
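As a rough sketch, a Redis-backed storage could be built on StackExchange.Redis. Note that the interface name and member signatures below (`IMemoryStorage`, `GetAsync`, `SetAsync`) are assumptions for illustration only — check the package's actual storage contract before implementing:

```csharp
using StackExchange.Redis;

// HYPOTHETICAL sketch: the interface shape is an assumption, not the real contract.
public class RedisMemoryStorage : IMemoryStorage
{
    private readonly IConnectionMultiplexer _redis;

    public RedisMemoryStorage(IConnectionMultiplexer redis) => _redis = redis;

    // Load the stored conversation summary for a composite key (e.g. userId + tenantId)
    public async Task<string?> GetAsync(string key, CancellationToken ct = default)
        => await _redis.GetDatabase().StringGetAsync(key);

    // Persist the summarized memory with a retention window
    public async Task SetAsync(string key, string summary, CancellationToken ct = default)
        => await _redis.GetDatabase().StringSetAsync(key, summary, TimeSpan.FromDays(30));
}
```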
Add vector search context to scenes:
// Register RAG service
builder.Services.AddRagService<AzureSearchRagService>();
// Global RAG
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.WithRag(rag =>
{
rag.TopK = 10;
rag.SearchAlgorithm = VectorSearchAlgorithm.CosineSimilarity;
rag.MinimumScore = 0.7;
})
.AddScene(...));
// Per-scene RAG (override or disable)
.AddScene("search", "Search documents", scene => scene
.WithRag(rag => { rag.TopK = 5; }) // Override for this scene
.WithService<ISearchService>(...))
.AddScene("calculator", "Math only", scene => scene
.WithoutRag()) // Disable RAG for this scene
Add real-time web search to scenes:
// Register web search service
builder.Services.AddWebSearchService<BingWebSearchService>();
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.WithWebSearch(ws =>
{
ws.MaxResults = 10;
ws.SafeSearch = true;
ws.Market = "en-US";
ws.Freshness = WebSearchFreshness.Week;
})
.AddScene(...));
// Per-scene web search
.AddScene("news", "Get latest news", scene => scene
.WithWebSearch(ws => { ws.Freshness = WebSearchFreshness.Day; }))
.AddScene("math", "Calculations", scene => scene
.WithoutWebSearch())
Connect to Model Context Protocol servers:
.AddScene("dev-tools", "Development tools", scene => scene
.WithMcpServer("mcp-server-name")
.WithService<IDevService>(...))
Add global system prompts that apply to all scenes:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
// Static system prompt
.AddMainActor("You are a professional assistant for Acme Corp.")
// Dynamic prompt (from context)
.AddMainActor(context =>
$"Current user: {context.Metadata.GetValueOrDefault("userName")}")
// Async dynamic prompt
.AddMainActor(async (context, ct) =>
{
var userService = context.ServiceProvider.GetRequiredService<IUserService>();
var user = await userService.GetUserAsync(context.Metadata["userId"].ToString()!, ct);
return $"User preferences: {user.Preferences}";
}, cacheForSubsequentCalls: true) // Cache after first call
// Custom IActor implementation
.AddMainActor<CustomActorService>()
.AddScene(...));
Access scene managers via the factory pattern:
// Inject IPlayFramework
public class MyService(IPlayFramework playFramework)
{
public async Task ProcessAsync(string message)
{
// Create scene manager for a specific factory
var sceneManager = playFramework.Create("default");
if (sceneManager is null) return;
await foreach (var response in sceneManager.ExecuteAsync(message))
{
Console.WriteLine(response.Message);
}
}
// Check if factory exists
public bool IsAvailable(string name) => playFramework.Exists(name);
}
Programmatic multi-modal content creation:
// Text only
var input = MultiModalInput.FromText("Hello!");
// Image from URL
var input = MultiModalInput.FromImageUrl("Describe this", "https://example.com/photo.jpg");
// Image from bytes
var input = MultiModalInput.FromImageBytes("Analyze this photo", imageBytes, "image/png");
// Audio
var input = MultiModalInput.FromAudioUrl("Transcribe this", "https://example.com/audio.mp3");
var input = MultiModalInput.FromAudioBytes("Transcribe this", audioBytes, "audio/mp3");
// File (PDF, etc.)
var input = MultiModalInput.FromFileUrl("Summarize this document", "https://example.com/doc.pdf");
var input = MultiModalInput.FromFileBytes("Summarize this", pdfBytes, "application/pdf");
// Use with ISceneManager
await foreach (var response in sceneManager.ExecuteAsync(input))
{
Console.WriteLine(response.Message);
// Check for multi-modal response content
if (response.HasImage)
{
var image = response.GetImage(); // DataContent
}
if (response.HasAudio)
{
var audio = response.GetAudio(); // DataContent
}
// Get all images/audio/files
foreach (var img in response.GetAllImages()) { /* ... */ }
}
Commands are fire-and-forget operations (unlike tools, which are bidirectional):
.AddScene("navigation", "Navigate the app", scene => scene
.OnClient(client => client
// Bidirectional tool (waits for client response)
.AddTool("getUserInput", "Ask user for input")
// Strongly-typed tool (auto-generates JSON schema from T)
.AddTool<LocationArgs>("getLocation", "Get GPS coordinates")
// Fire-and-forget command
.AddCommand("navigateTo", "Navigate to a page")
// Command with feedback mode
.AddCommand("playSound", "Play notification sound",
feedbackMode: CommandFeedbackMode.OnError, // Never | OnError | Always
timeoutSeconds: 10)))
All possible AiResponseStatus values:
| Status | Description |
|---|---|
| Completed | Execution completed successfully |
| Streaming | Streaming chunk in progress |
| ExecutingScene | Currently executing a scene |
| ExecutingTool | Calling a server-side tool |
| AwaitingClient | Waiting for client-side tool response |
| CommandClient | Fire-and-forget command sent to client |
| Planning | Creating execution plan |
| ExecutingPlan | Executing a plan step |
| Summarizing | Summarizing conversation |
| Directing | Director evaluating results |
| BudgetExceeded | Request exceeded max budget |
| Error | An error occurred |
| Unauthorized | Authorization failed |
| RateLimited | Rate limit exceeded |
| Cached | Response served from cache |
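These statuses are typically dispatched from the streaming loop. A minimal sketch, assuming the response exposes a `Status` property of type `AiResponseStatus` alongside the `Message` property shown in the `ISceneManager` examples above:

```csharp
await foreach (var response in sceneManager.ExecuteAsync(message))
{
    switch (response.Status) // assumed property name; adjust to the actual shape
    {
        case AiResponseStatus.Streaming:
            Console.Write(response.Message); // partial chunk, append as it arrives
            break;
        case AiResponseStatus.AwaitingClient:
            // Run the requested client-side tool and return its result
            break;
        case AiResponseStatus.Error:
        case AiResponseStatus.Unauthorized:
        case AiResponseStatus.RateLimited:
            Console.Error.WriteLine($"Stopped: {response.Status} - {response.Message}");
            return;
        case AiResponseStatus.Completed:
            Console.WriteLine(); // stream finished
            break;
    }
}
```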
Override default implementations:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
// Custom planner (instead of DeterministicPlanner)
.AddCustomPlanner<MyPlanner>()
// Custom summarizer
.AddCustomSummarizer<MySummarizer>()
// Custom director
.AddCustomDirector<MyDirector>()
// Custom JSON serialization
.AddCustomJsonService<MyJsonService>()
// Or with factory
.AddCustomJsonService(sp => new MyJsonService(sp.GetRequiredService<IOptions<JsonOptions>>()))
// Custom context injection
.AddContext<MyContextProvider>()
// Default execution mode
.WithExecutionMode(SceneExecutionMode.Planning)
.AddScene(...));
Fine-grained cost tracking:
builder.Services.AddPlayFramework("default", pb => pb
.WithChatClient("gpt-4o")
.WithChatClient("gpt-4o-mini")
// Global cost tracking
.WithCostTracking("USD", inputCostPer1K: 0.01m, outputCostPer1K: 0.03m)
// Per-model pricing
.WithModelCosts("gpt-4o", inputCostPer1K: 0.005m, outputCostPer1K: 0.015m)
.WithModelCosts("gpt-4o-mini", inputCostPer1K: 0.00015m, outputCostPer1K: 0.0006m)
// Per-client pricing (overrides model costs)
.WithClientCosts("azure-client", inputCostPer1K: 0.004m, outputCostPer1K: 0.012m)
.AddScene(...));
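As a sanity check on the per-model pricing above: with gpt-4o at $0.005 per 1K input tokens and $0.015 per 1K output tokens, a request consuming 2,000 input and 500 output tokens costs:

```csharp
// 2,000 input tokens and 500 output tokens at the gpt-4o rates configured above
decimal cost = (2000m / 1000m) * 0.005m   // input:  $0.010
             + (500m / 1000m) * 0.015m;   // output: $0.0075
// cost == 0.0175m, i.e. $0.0175 USD
```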
Fine-grained observability control:
// Full control
.WithTelemetry(telemetry =>
{
telemetry.EnableTracing = true;
telemetry.EnableMetrics = true;
telemetry.TraceScenes = true;
telemetry.TraceTools = true;
telemetry.TraceLlmCalls = true;
telemetry.IncludeLlmPrompts = false; // Don't log prompts (security)
telemetry.IncludeLlmResponses = false; // Don't log responses (security)
telemetry.TracePlanning = true;
telemetry.TraceSummarization = true;
telemetry.SamplingRate = 0.5; // 50% sampling
})
// Shortcuts
.WithTracing(samplingRate: 1.0) // Enable tracing only
.WithMetrics() // Enable metrics only
.WithoutTelemetry() // Disable all telemetry
Contributions welcome! Please open an issue or PR.
MIT © Alessandro Rapiti
Made with ❤️ by the Rystem team