13 packages tagged with “pose-estimation”
Use YOLOv8 in real time for object detection, instance segmentation, pose estimation, and image classification via ONNX Runtime
Use YOLO11 in real time for object detection tasks with edge-optimized performance, powered by ONNX Runtime.
YoloDotNet is a modular, lightweight C# library for high-performance, real-time computer vision and YOLO-based inference in .NET. Built on .NET 8 and powered by ONNX Runtime, YoloDotNet provides explicit, production-ready inference for modern YOLO model families, including YOLOv5u through YOLOv26, YOLO-World, YOLO-E, and RT-DETR. The library features a fully modular execution architecture with pluggable execution providers for CPU, CUDA/TensorRT, Intel OpenVINO, Apple CoreML, and DirectML, enabling predictable deployment across Windows, Linux, and macOS. YoloDotNet intentionally avoids heavy computer vision frameworks such as OpenCV: image handling and preprocessing are performed with SkiaSharp, with no Python runtime, no hidden preprocessing, and no implicit behavior. Designed for low-latency inference and long-running workloads, YoloDotNet gives developers full control over execution, memory usage, and preprocessing, letting you choose the hardware, platform, and execution backend without unnecessary abstraction or overhead.
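In code, the explicit inference flow described above looks roughly like the following sketch. It follows YoloDotNet's documented pattern of a `Yolo` class configured through `YoloOptions`, with SkiaSharp handling image decoding; exact option and result-property names can differ between versions, so treat the specifics as assumptions and verify against the project's current README:

```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Enums;
using YoloDotNet.Models;

// Load a YOLO ONNX model for object detection. The YoloOptions
// property names below follow YoloDotNet's published examples and
// may differ slightly in v4 - check the current README.
using var yolo = new Yolo(new YoloOptions
{
    OnnxModel = @"models/yolov8s.onnx",   // path to your exported ONNX model
    ModelType = ModelType.ObjectDetection
});

// SkiaSharp (not OpenCV) handles image decoding, as noted above.
using var image = SKImage.FromEncodedData(@"images/street.jpg");

// Run inference; each result carries a label, a confidence score,
// and a bounding box.
var results = yolo.RunObjectDetection(image, confidence: 0.25);

foreach (var detection in results)
    Console.WriteLine($"{detection.Label.Name}: {detection.Confidence:P1}");
```

The other tasks follow the same shape via their own entry points (for example `RunPoseEstimation`, `RunSegmentation`, `RunClassification`, and `RunObbDetection`), each returning task-specific result types.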
YoloDotNet.ExecutionProvider.CoreML enables hardware-accelerated inference on macOS using Apple’s native Core ML framework. This provider integrates ONNX Runtime’s CoreML execution backend, allowing models to run efficiently on Apple Silicon by leveraging system-level ML acceleration. No external runtimes, drivers, or SDKs are required beyond a supported version of macOS. Designed for YoloDotNet’s modular, execution-provider-agnostic architecture, the CoreML provider integrates cleanly with the core library and exposes predictable, explicit inference behavior. It is well-suited for macOS applications that require fast, low-power, native inference using Apple’s ML stack without additional dependencies.
YoloDotNet.ExecutionProvider.Cpu provides a fully portable CPU-based execution provider for YoloDotNet using ONNX Runtime’s built-in CPU backend. This execution provider requires no additional system-level dependencies and works out of the box on Windows, Linux, and macOS. It is ideal for development, testing, CI environments, and production scenarios where GPU or NPU acceleration is unavailable. The CPU provider integrates seamlessly with YoloDotNet’s modular execution provider architecture introduced in v4.0 and supports all inference tasks including object detection, segmentation, classification, pose estimation, and OBB detection.
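As a project-setup sketch, referencing the core library plus exactly one provider package with the standard dotnet CLI might look like this (only the package IDs named on this page are used; pick a different `YoloDotNet.ExecutionProvider.*` package to target other hardware):

```shell
# Core library (model loading, preprocessing, task APIs)
dotnet add package YoloDotNet

# Exactly one execution provider per project - here the portable CPU
# backend, which needs no system-level dependencies
dotnet add package YoloDotNet.ExecutionProvider.Cpu
```

Because the CPU provider works identically on Windows, Linux, and macOS, it is a reasonable default for CI pipelines even when production builds reference a hardware-accelerated provider instead.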
CUDA and TensorRT execution provider for YoloDotNet, enabling GPU-accelerated inference on NVIDIA hardware using ONNX Runtime. This execution provider supports CUDA for general GPU acceleration and optional NVIDIA TensorRT integration for maximum performance, lower latency, and optimized engine execution. It is designed for high-throughput and real-time inference workloads on Windows and Linux systems with supported NVIDIA GPUs. The provider is fully compatible with the YoloDotNet core library and follows the new modular, execution-provider-agnostic architecture introduced in YoloDotNet v4.0.
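Selecting a provider under the v4 modular architecture might look like the following sketch. The `ExecutionProvider` option and the `CudaExecutionProvider` type name here are assumptions inferred from the package naming, not confirmed API, so consult the YoloDotNet documentation for the actual v4 surface:

```csharp
using YoloDotNet;
using YoloDotNet.Enums;
using YoloDotNet.Models;

// Hypothetical v4-style provider selection. The ExecutionProvider
// property and the CudaExecutionProvider type are assumed names based
// on the package naming; the real v4 API may differ.
using var yolo = new Yolo(new YoloOptions
{
    OnnxModel = @"models/yolov8s.onnx",
    ModelType = ModelType.ObjectDetection,
    ExecutionProvider = new CudaExecutionProvider(gpuId: 0)
});
```

When TensorRT is enabled on top of CUDA, expect a one-time engine-build cost on first run in exchange for lower steady-state latency, which suits the long-running, high-throughput workloads this provider targets.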
YoloDotNet OpenVINO Execution Provider enables optimized inference using Intel® OpenVINO™ on supported Intel CPUs, integrated GPUs, and accelerators. This execution provider integrates ONNX Runtime with Intel OpenVINO to deliver high-performance, low-latency inference on Intel hardware across Windows and Linux. It is ideal for CPU-focused deployments, edge systems, and environments where Intel hardware acceleration is preferred over CUDA-based solutions. The provider is fully modular and designed to work with the execution-provider-agnostic YoloDotNet core library introduced in v4.0. Only one execution provider should be referenced per project.
YoloDotNet.ExecutionProvider.DirectML enables hardware-accelerated inference on Windows using Microsoft’s DirectML framework. This execution provider integrates ONNX Runtime’s DirectML backend and runs on top of DirectX 12, allowing inference to be accelerated on a wide range of GPUs using the Windows graphics driver stack. DirectML is a Windows-only technology and is supported on Windows 10 and Windows 11 with a DirectX 12–capable GPU. No vendor-specific SDKs or external runtimes are required beyond the standard Windows graphics drivers. This makes the DirectML execution provider a low-friction option for GPU acceleration on Windows systems. Designed for YoloDotNet’s modular, execution-provider-agnostic architecture, the DirectML provider integrates cleanly with the core library and exposes explicit, predictable inference behavior. It is well-suited for Windows applications that require GPU acceleration without locking into a specific hardware vendor or proprietary ML stack.