.NET wrapper of the Paddle Inference library. This package contains no native binaries; please also install a native runtime package such as `Sdcb.PaddleInference.runtime.*`.

```shell
dotnet add package Sdcb.PaddleInference
```
💗 .NET wrapper for the PaddleInference C API, supporting Windows (x64) 💻, NVIDIA CUDA 10.2+ based GPUs 🎮, and Linux (Ubuntu 22.04 x64) 🐧. It currently contains the following main components:
- A `text_image_orientation_infer` model to detect a text image's rotation angle (0, 90, 180, or 270 degrees).
- A PaddleNLP Lac Chinese segmenter model, supporting tagging and customized words.
- Running ONNX models using C#; please check out this page 📄.
| NuGet Package 💼 | Version 📌 | Description 📚 |
|---|---|---|
| Sdcb.PaddleInference | | Paddle Inference C API .NET binding ⚙️ |
Package Selection Guide:
- `Sdcb.PaddleInference.runtime.win64.mkl` is recommended for most users. It offers the best balance between performance and package size. Note that this package does not support GPU acceleration, but it is suitable for most general scenarios.
- The `openblas-noavx` packages are tailored for older CPUs that do not support the AVX2 instruction set.

Important:
Not all GPU packages are suitable for every card. Please refer to the following GPU-to-sm suffix mapping:
| sm Suffix | Supported GPU Series |
|---|---|
| sm61 | GTX 10 Series |
| sm75 | RTX 20 Series (and GTX 16xx series such as GTX 1660) |
| sm86 | RTX 30 Series |
| sm89 | RTX 40 Series |
| sm120 | RTX 50 Series (supported by CUDA 12.9 only) |
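When choosing between the `mkl` and `openblas-noavx` CPU packages above, you can check AVX2 support directly from .NET using the standard `System.Runtime.Intrinsics.X86` API. A minimal sketch (the package recommendations in the comments mirror the guide above):

```csharp
using System;
using System.Runtime.Intrinsics.X86;

class AvxCheck
{
    static void Main()
    {
        // Avx2.IsSupported reports whether the current CPU (and JIT)
        // supports the AVX2 instruction set.
        Console.WriteLine(Avx2.IsSupported
            ? "AVX2 supported: use Sdcb.PaddleInference.runtime.win64.mkl"
            : "No AVX2: use an openblas-noavx runtime package");
    }
}
```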
Linux OS packages (preview):
| Package | Version 📌 | Description |
|---|---|---|
| Sdcb.PaddleInference.runtime.linux-loongarch64 | | Loongnix GCC 8.2 Loongarch64 |
| Sdcb.PaddleInference.runtime.linux64.mkl.gcc82 | | Linux-x64 GCC 8.2 (tested on Ubuntu 22.04) |
Be aware that Linux cannot modify the value of `LD_LIBRARY_PATH` at runtime, so dependent dynamic libraries (such as `libcommon.so`) must be loadable before the main dynamic library (such as `libpaddle_inference_c.so`). There is also a known protobuf error, reported here: https://github.com/PaddlePaddle/Paddle/issues/62670

Because of these issues, which I'm unable to resolve, all NuGet packages for Linux are in a preview state. Currently, if you use a NuGet package on Linux, you need to manually set the `LD_LIBRARY_PATH` environment variable before running the program, using the following commands:
For x64 CPUs:

```shell
export LD_LIBRARY_PATH=/<program directory>/bin/Debug/net8.0/runtimes/linux-x64/native:$LD_LIBRARY_PATH
```

For Loongson 5000 or above CPUs (linux-loongarch64):

```shell
export LD_LIBRARY_PATH=/<program directory>/bin/Debug/net8.0/runtimes/linux-loongarch64/native:$LD_LIBRARY_PATH
```
Some packages are already deprecated (version <= 2.5.0):
Any other package that starts with `Sdcb.PaddleInference.runtime` might be deprecated.
Baidu packages were downloaded from here: https://www.paddlepaddle.org.cn/inference/master/guides/install/download_lib.html#windows
All Windows packages were compiled manually by me.
Baidu's official GPU packages are too large (>1.5 GB) to publish to nuget.org, and GitHub imposes a 250 MB limit on uploads; there are some related issues about this.
But you're good to build your own GPU NuGet package using 01-build-native.linq 🛠️.
- **Mkldnn** - `PaddleDevice.Mkldnn()`: based on Mkldnn; generally fast
- **Openblas** - `PaddleDevice.Openblas()`: based on OpenBLAS; slower, but the dependency files are smaller and it consumes less memory
- **Onnx** - `PaddleDevice.Onnx()`: based on ONNX Runtime; also pretty fast and consumes less memory
- **Gpu** - `PaddleDevice.Gpu()`: much faster, but relies on an NVIDIA GPU and CUDA
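The device factories can be picked at runtime. A sketch, assuming the `PaddleDevice` methods return `PaddleConfig` configuration delegates as in the Sdcb sample projects (the `USE_GPU` environment variable here is a hypothetical switch for illustration):

```csharp
using System;
using Sdcb.PaddleInference;

// Choose a device configurator at runtime. The delegate is later passed
// to a predictor; check your package version for the exact types.
Action<PaddleConfig> device = Environment.GetEnvironmentVariable("USE_GPU") == "1"
    ? PaddleDevice.Gpu()      // requires NVIDIA GPU + CUDA/cuDNN/TensorRT
    : PaddleDevice.Mkldnn();  // CPU default; fast on AVX2-capable CPUs
```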
If you want to use the GPU, refer to the "How to enable GPU?" section of the FAQ; CUDA/cuDNN/TensorRT need to be installed manually.
Please ensure the latest Visual C++ Redistributable is installed on Windows (it is typically installed automatically if Visual Studio is installed) 🛠️
Otherwise, loading will fail with the following error (Windows only):
```
DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
```
If the error is `Unable to load DLL OpenCvSharpExtern.dll or one of its dependencies`, then most likely Media Foundation is not installed on your Windows Server 2012 R2 machine: <img width="830" alt="image" src="https://user-images.githubusercontent.com/1317141/193706883-6a71ea83-65d9-448b-afee-2d25660430a1.png">
Many old CPUs do not support AVX instructions; please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable Mkldnn by using `PaddleDevice.Openblas()` 🚀
If you're using Win7-x64, and your CPU does support AVX2, then you might also need to extract the following 3 DLLs into C:\Windows\System32 folder to make it run: 💾
You can download these 3 DLLs here: win7-x64-onnxruntime-missing-dlls.zip ⬇️
Enable GPU support can significantly improve the throughput and lower the CPU usage. 🚀
Steps to use GPU in Windows:
1. Install `Sdcb.PaddleInference.runtime.win64.cu120*` instead of `Sdcb.PaddleInference.runtime.win64.mkl`; do not install both. 📦
2. Install CUDA and configure the `PATH` (or `LD_LIBRARY_PATH` on Linux) 🔧
3. Install cuDNN and configure the `PATH` (or `LD_LIBRARY_PATH` on Linux) 🛠️
4. Install TensorRT and configure the `PATH` (or `LD_LIBRARY_PATH` on Linux) ⚙️

You can refer to this blog page for GPU usage in Windows: 关于PaddleSharp GPU使用 常见问题记录 (FAQ notes on using PaddleSharp with a GPU) 📝
If you're using Linux, you need to compile your own OpenCvSharp4 environment following the docker build scripts, and complete the CUDA/cuDNN/TensorRT configuration tasks. 🐧
After these steps are completed, you can try specifying PaddleDevice.Gpu() in the paddle device configuration parameter, then enjoy the performance boost! 🎉
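As an illustration of that last step, here is a hedged sketch using the companion Sdcb.PaddleOCR package; the model name, namespaces, and the `src` image variable are assumptions taken from that project's samples, not guaranteed by this package:

```csharp
using System;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models.Local;

// Illustration only: pass PaddleDevice.Gpu() wherever a device/configuration
// parameter is accepted. `src` is assumed to be an OpenCvSharp Mat holding
// the input image.
using PaddleOcrAll all = new(LocalFullModels.ChineseV3, PaddleDevice.Gpu());
PaddleOcrResult result = all.Run(src);
Console.WriteLine(result.Text);
```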
QQ group of C#/.NET computer vision technical communication (C#/.NET计算机视觉技术交流群): 579060605
