The Windows App SDK empowers all Windows Desktop apps with modern Windows UI, APIs, and platform features, including back-compat support. This package contains the Windows Machine Learning APIs.
To add the package to your project:

```shell
dotnet add package Microsoft.WindowsAppSDK.ML
```

The Microsoft.WindowsAppSDK.ML package brings Windows ML to your project as part of the Windows App SDK. To use it correctly, follow these steps:
Windows ML is intended to be used as a framework-dependent component, which means your app should reference either:

- `Microsoft.WindowsAppSDK` (recommended), which automatically includes `Microsoft.WindowsAppSDK.ML` as a transitive dependency, or
- both `Microsoft.WindowsAppSDK.ML` and `Microsoft.WindowsAppSDK.Runtime`.

> **Note:** If you reference only `Microsoft.WindowsAppSDK.ML` without `Microsoft.WindowsAppSDK.Runtime`, your app will not deploy or run correctly.
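For illustration, a minimal sketch of the recommended reference in a .csproj (the version number is a placeholder, not a real release):

```xml
<ItemGroup>
  <!-- Recommended: the umbrella package pulls in Microsoft.WindowsAppSDK.ML transitively. -->
  <!-- "x.y.z" is a placeholder; use the latest stable release. -->
  <PackageReference Include="Microsoft.WindowsAppSDK" Version="x.y.z" />
</ItemGroup>
```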
For applications that need to bundle ONNX Runtime dependencies locally without relying on framework packages, enable self-contained mode:
```xml
<PropertyGroup>
  <WindowsAppSDKSelfContained>true</WindowsAppSDKSelfContained>
</PropertyGroup>
```
Tip: If ONNX Runtime binaries are not being copied to your output directory, ensure your project has an explicit reference to `Microsoft.WindowsAppSDK.Base`, as this package contains the build targets needed for binary deployment.
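Putting those two pieces together, a sketch of a self-contained configuration (the version number is a placeholder):

```xml
<PropertyGroup>
  <!-- Deploy ONNX Runtime binaries alongside the app instead of using the framework package. -->
  <WindowsAppSDKSelfContained>true</WindowsAppSDKSelfContained>
</PropertyGroup>
<ItemGroup>
  <!-- Explicit reference ensures the build targets that copy the binaries are present. -->
  <PackageReference Include="Microsoft.WindowsAppSDK.Base" Version="x.y.z" />
</ItemGroup>
```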
In self-contained mode, ONNX Runtime binaries are deployed alongside your application:
```
MyApp/
├── MyApp.exe
├── Microsoft.Windows.AI.MachineLearning.dll
├── onnxruntime.dll
├── onnxruntime_providers_shared.dll
└── DirectML.dll
```
This mode is useful for apps that cannot depend on the Windows App SDK framework package being present on the target machine.
If your app is unpackaged (not MSIX), add the following property to your project to enable the Windows App SDK bootstrapper:
```xml
<WindowsPackageType>None</WindowsPackageType>
```
This is a temporary requirement and will be auto-detected in future releases.
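For illustration, a minimal sketch of an unpackaged app's project file (the target framework moniker and version are assumed examples; adjust for your project):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <!-- Target framework shown here is an assumed example. -->
    <TargetFramework>net8.0-windows10.0.19041.0</TargetFramework>
    <!-- Unpackaged (non-MSIX) app: enables the Windows App SDK bootstrapper. -->
    <WindowsPackageType>None</WindowsPackageType>
  </PropertyGroup>
  <ItemGroup>
    <!-- "x.y.z" is a placeholder version. -->
    <PackageReference Include="Microsoft.WindowsAppSDK" Version="x.y.z" />
  </ItemGroup>
</Project>
```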
Windows ML supports enumerating available execution providers through the PackageExtensionCatalog API. However, this functionality has specific requirements and limitations:
For execution provider enumeration to work in packaged applications, your app manifest must declare the packageQuery capability:
```xml
<Package
  xmlns:uap4="http://schemas.microsoft.com/appx/manifest/uap/windows10/4"
  IgnorableNamespaces="uap4">
  <Capabilities>
    <uap4:Capability Name="packageQuery" />
  </Capabilities>
</Package>
```
Enumeration will not work if the packageQuery capability is not declared in the app manifest.

For unpackaged applications running in AppContainer mode, the packageQuery capability must be granted to the process token when creating the AppContainer. This is done by including the capability SID in the AppContainer creation:
```cpp
#include <windows.h>
#include <sddl.h> // ConvertStringSidToSidW

// When creating an AppContainer for an unpackaged process
PSID packageQuerySid = nullptr;
ConvertStringSidToSidW(
    L"S-1-15-3-1024-1962849891-688487262-3571417821-3628679630-802580238-1922556387-206211640-3335523193",
    &packageQuerySid);

SID_AND_ATTRIBUTES capabilities[] = {
    { packageQuerySid, SE_GROUP_ENABLED }
};

// Include the capabilities when creating the AppContainer token.
// This allows the unpackaged AppContainer process to use the
// PackageExtensionCatalog APIs.

LocalFree(packageQuerySid); // Free the SID when finished with it
```
The SID S-1-15-3-1024-1962849891-688487262-3571417821-3628679630-802580238-1922556387-206211640-3335523193 corresponds to the packageQuery capability.
When execution provider enumeration is unavailable or returns no results, Windows ML gracefully falls back to built-in providers, such as the CPU execution provider. Applications that require specific execution providers should broker and configure them explicitly rather than rely on automatic enumeration.
The ONNX Runtime headers are included in a winml subdirectory to avoid conflicts with other versions of ONNX Runtime. When using these headers in your code, include them with the subdirectory prefix:
```cpp
#include <winml/onnxruntime_c_api.h>
#include <winml/onnxruntime_cxx_api.h>
```
If your existing code uses the headers without the winml/ prefix and you cannot update your includes, you can enable this behavior by setting the WinMLEnableDefaultOrtHeaderIncludePath property in your project:
```xml
<PropertyGroup>
  <WinMLEnableDefaultOrtHeaderIncludePath>true</WinMLEnableDefaultOrtHeaderIncludePath>
</PropertyGroup>
```
For redistributed components that are consumed by other applications where ONNX Runtime initialization is handled externally, enable passthrough mode:
```xml
<PropertyGroup>
  <WindowsAppSDKMLPassthroughOnnxRuntime>true</WindowsAppSDKMLPassthroughOnnxRuntime>
</PropertyGroup>
```
Passthrough mode is designed for scenarios where the host application owns loading and initializing ONNX Runtime, and your component only needs to consume the already-loaded runtime.
In passthrough mode, Windows ML assumes ONNX Runtime (onnxruntime.dll) is already loaded in the current process and uses GetModuleHandle to access the already-loaded library instead of attempting its own initialization or dependency resolution.
If your project is consuming Windows ML from a DLL, the Windows App SDK bootstrapping logic is not enabled by default. The expectation is that the application consuming the DLL should enable bootstrapping if needed. In the case that you want to enable it within the DLL itself, you will need to ensure the following is set in your project properties:
```xml
<WindowsAppSdkBootstrapInitialize>true</WindowsAppSdkBootstrapInitialize>
```
The package implements automatic ONNX Runtime initialization through a custom OrtGetApiBase
implementation. This takes care of the initialization details and provides transparent access to the
ONNX Runtime binaries that ship with Windows ML, without requiring developers to link against
`onnxruntime.lib`. See `include\WindowsMLAutoInitializer.cpp`.
For applications requiring explicit control over initialization sequences or those not utilizing ONNX Runtime APIs, auto-initialization can be disabled via MSBuild property:
```xml
<PropertyGroup>
  <!-- Disable Windows ML auto-initialization only -->
  <DisableWindowsAppSDKMLAutoInitialize>true</DisableWindowsAppSDKMLAutoInitialize>
</PropertyGroup>
```
For advanced C++ applications where the auto-initializer pattern is not feasible and you need to directly link against the ONNX Runtime import library, you can enable native linking:
```xml
<PropertyGroup>
  <WindowsMLNativeLinkOnnxRuntime>true</WindowsMLNativeLinkOnnxRuntime>
</PropertyGroup>
```
⚠️ Important: This is an edge-case scenario for advanced users. Most applications should use the standard auto-initialization approach described above.
When WindowsMLNativeLinkOnnxRuntime is set to true:
- Auto-initialization is disabled, as with `DisableWindowsAppSDKMLAutoInitialize`.
- Your project links against `onnxruntime.lib` for your target platform (x64, ARM64, ARM64EC).
- You can include the ONNX Runtime headers directly via `#include <winml/onnxruntime_c_api.h>` or `#include <winml/onnxruntime_cxx_api.h>`.

The consuming binary must ensure the correct onnxruntime.dll is available in the proper DLL search order at run time.
Failure to meet these requirements will result in runtime linking errors or incompatible DLL loading.
This mode is recommended only for advanced scenarios where the auto-initializer pattern is not feasible.
Build your project as usual. The Windows ML APIs will be available for use.
For C++ applications using CMake without MSBuild integration, the microsoft-windows-ai-machinelearning vcpkg port provides standalone access to OnnxRuntime and the WinMLEpCatalog Flat C API. The vcpkg port has no Windows App SDK dependency; the host process is responsible for any Windows App SDK initialization it needs.
```shell
# Install the stable release
vcpkg install microsoft-windows-ai-machinelearning

# Or install experimental/preview builds
vcpkg install microsoft-windows-ai-machinelearning[experimental]
```
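If you use vcpkg manifest mode instead of classic-mode `vcpkg install`, a sketch of the corresponding vcpkg.json:

```json
{
  "dependencies": [
    "microsoft-windows-ai-machinelearning"
  ]
}
```

When configuring with CMake, point `CMAKE_TOOLCHAIN_FILE` at vcpkg's `scripts/buildsystems/vcpkg.cmake` so `find_package` can locate the port.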
```cmake
find_package(microsoft-windows-ai-machinelearning CONFIG REQUIRED)

add_executable(MyApp main.cpp)
target_link_libraries(MyApp PRIVATE Microsoft::Windows::AI::MachineLearning)

# Copy runtime DLLs to output directory
winml_copy_runtime_dlls(MyApp)
```
```cpp
// WinML Flat C API for execution provider enumeration
#include <WinMLEpCatalog.h>

// OnnxRuntime C/C++ API (no prefix required)
#include <onnxruntime_cxx_api.h>

// Alternative with prefix
#include <winml/onnxruntime_cxx_api.h>
```
The vcpkg port provides:

- Headers: `WinMLEpCatalog.h`, `winml/onnxruntime_*.h`
- Import libraries: `onnxruntime.lib`, `Microsoft.Windows.AI.MachineLearning.lib`
- Runtime DLLs: `onnxruntime.dll`, `onnxruntime_providers_shared.dll`, `Microsoft.Windows.AI.MachineLearning.dll`, `DirectML.dll`

For examples, see the Samples directory or visit WindowsAppSDK-Samples.