Whisper.net.Runtime.Cuda.Linux contains the native runtime libraries to enable Whisper on .NET with Whisper.net and GPU support (CUDA) for Linux:

$ dotnet add package Whisper.net.Runtime.Cuda.Linux

Open-Source Whisper.net
Dotnet bindings for OpenAI Whisper made possible by whisper.cpp
To install Whisper.net with all the available runtimes, run the following command in the Package Manager Console:
PM> Install-Package Whisper.net.AllRuntimes
Or add a package reference in your .csproj file:
<PackageReference Include="Whisper.net.AllRuntimes" Version="1.9.0" />
Whisper.net is the main package that contains the core functionality but does not include any runtimes. Whisper.net.AllRuntimes includes all available runtimes for Whisper.net, including both CUDA 13 (Whisper.net.Runtime.Cuda) and CUDA 12 (Whisper.net.Runtime.Cuda12) GPU builds.
Runtimes can also be installed individually and combined as needed. For example, to use CPU-only inference, add the following package references:
<PackageReference Include="Whisper.net" Version="1.9.0" />
<PackageReference Include="Whisper.net.Runtime" Version="1.9.0" />
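Mixing GPU and CPU runtimes follows the same pattern: reference the core package plus each runtime you want to ship. A minimal sketch of a combined ItemGroup (version numbers mirror the 1.9.0 shown above; adjust them to the release you use):

```xml
<ItemGroup>
  <!-- Core package: managed API only, no native runtimes -->
  <PackageReference Include="Whisper.net" Version="1.9.0" />
  <!-- CUDA 13 and CUDA 12 GPU runtimes -->
  <PackageReference Include="Whisper.net.Runtime.Cuda" Version="1.9.0" />
  <PackageReference Include="Whisper.net.Runtime.Cuda12" Version="1.9.0" />
  <!-- CPU runtime as a fallback when no compatible GPU driver is found -->
  <PackageReference Include="Whisper.net.Runtime" Version="1.9.0" />
</ItemGroup>
```

At run time the loader picks the first runtime in its priority order whose native library can actually load on the machine, so shipping a CPU fallback alongside the GPU runtimes is safe.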
We also have a custom-built GPT inside ChatGPT that can answer questions based on this code base, previous issues, and releases. Available here.
Please try asking it before opening a new question here, as it can often help you faster.
Whisper.net comes with multiple runtimes to support different platforms and hardware acceleration. Below are the available runtimes:
| Runtime | Description | Linux dependencies |
|---|---|---|
| Whisper.net.Runtime | The default runtime, which uses the CPU for inference. Available on all platforms. If your CPU does not support AVX instructions, use Whisper.net.Runtime.NoAvx instead. | libstdc++6, glibc 2.31 |
| Whisper.net.Runtime.NoAvx | CPU inference for CPUs that do not support AVX instructions. | libstdc++6, glibc 2.31 |
| Whisper.net.Runtime.Cuda | Contains the native whisper.cpp library with NVidia CUDA support enabled (built with the CUDA 13 toolchain). | |
| Whisper.net.Runtime.Cuda12 | Contains the native whisper.cpp library with NVidia CUDA support enabled, built against the CUDA 12 toolchain for systems that only provide CUDA 12.x drivers. | |
| Whisper.net.Runtime.CoreML | Contains the native whisper.cpp library with Apple CoreML support enabled. | |
| Whisper.net.Runtime.OpenVino | Contains the native whisper.cpp library with Intel OpenVino support enabled. | |
| Whisper.net.Runtime.Vulkan | Contains the native whisper.cpp library with Vulkan support enabled. | |
You can install and use multiple runtimes in the same project. The runtime will be automatically selected based on the platform you are running the application on and the availability of the native runtime.
The following order of priority will be used by default:
1. Whisper.net.Runtime.Cuda (NVidia devices with CUDA 13 drivers installed)
2. Whisper.net.Runtime.Cuda12 (NVidia devices with CUDA 12 drivers installed)
3. Whisper.net.Runtime.Vulkan (Windows x64 with Vulkan installed)
4. Whisper.net.Runtime.CoreML (Apple devices)
5. Whisper.net.Runtime.OpenVino (Intel devices)
6. Whisper.net.Runtime (CPU inference)
7. Whisper.net.Runtime.NoAvx (CPU inference without AVX support)

The loader automatically probes the CUDA runtimes in this order and validates the installed driver via cudaRuntimeGetVersion, so machines with only CUDA 12 drivers will transparently fall back to Whisper.net.Runtime.Cuda12.
To change the order or force a specific runtime, set the RuntimeLibraryOrder on the RuntimeOptions:
RuntimeOptions.RuntimeLibraryOrder =
[
RuntimeLibrary.CoreML,
RuntimeLibrary.OpenVino,
RuntimeLibrary.Cuda,
RuntimeLibrary.Cuda12,
RuntimeLibrary.Cpu
];
Whisper.net follows semantic versioning.
Starting from version 1.8.0, Whisper.net does not follow the same versioning scheme as whisper.cpp, which creates releases based on specific commits in their master branch (e.g., b2254, b2255).
To track the whisper.cpp version used in a specific Whisper.net release, you can check the whisper.cpp submodule. The commit hash for the tag associated with the release will indicate the corresponding whisper.cpp version.
Whisper.net uses Ggml models to perform speech recognition and translation. You can find more about Ggml models here.
For easier integration, Whisper.net provides a Downloader using Hugging Face.
var modelName = "ggml-base.bin";
if (!File.Exists(modelName))
{
using var modelStream = await WhisperGgmlDownloader.Default.GetGgmlModelAsync(GgmlType.Base);
using var fileWriter = File.OpenWrite(modelName);
await modelStream.CopyToAsync(fileWriter);
}
To authenticate the Hugging Face download (optional), set the HF_TOKEN environment variable first:

export HF_TOKEN=hf_xxx        # Linux/macOS (bash)
$env:HF_TOKEN = "hf_xxx"      # Windows (PowerShell)

Once the model is on disk, create a factory and a processor, then transcribe a wav file:

using var whisperFactory = WhisperFactory.FromPath("ggml-base.bin");
using var processor = whisperFactory.CreateBuilder()
.WithLanguage("auto")
.Build();
using var fileStream = File.OpenRead(wavFileName);
await foreach (var result in processor.ProcessAsync(fileStream))
{
Console.WriteLine($"{result.Start}->{result.End}: {result.Text}");
}
You can find the documentation and code samples here.
For instructions on running the test suites locally (including required .NET SDKs, optional environment variables like HF_TOKEN), see tests/README.md.
MIT License. See LICENSE for details.