Cross-platform audio engine library for desktop platforms (Windows, Linux, macOS).

```shell
$ dotnet add package OwnAudioSharp
```

OwnAudio is a cross-platform C# audio library that provides a high-level API for audio playback, recording, and processing. By default it uses FFmpeg for audio decoding and PortAudio for audio I/O. If FFmpeg or PortAudio is not installed, it automatically substitutes the missing one with MiniAudio, so the library can work without any external dependencies at all. The MiniAudio implementation also makes the API usable on mobile platforms.
Check out the sample application OwnAudioSharpDemo that demonstrates the capabilities of the OwnAudioSharp audio library through an Avalonia MVVM application using ReactiveUI. MainWindowViewModel.cs contains the core logic for audio processing, playback, effects application, and UI control.
The table below summarizes the supported operating systems, the APIs used, and their testing status.
| System | APIs | Status |
|---|---|---|
| Windows | PortAudio 2, MiniAudio, FFmpeg 6 | Tested |
| Linux | PortAudio 2, MiniAudio, FFmpeg 6 | Tested |
| macOS | PortAudio 2, MiniAudio, FFmpeg 6 | Tested |
| Android | MiniAudio | Not tested |
| iOS | MiniAudio | Not tested |
The library will attempt to find these dependencies in standard system locations but also supports specifying custom paths.
You can add this library to your project via NuGet or by directly referencing the project.

```shell
NuGet\Install-Package OwnAudioSharp
```

By default, the package includes MiniAudio, which is ready to use on all systems, so you can get started right away!
If you want to use PortAudio and FFmpeg on certain platforms for extended functionality, you can configure them as follows.

On Windows:

1. Grab the FFmpeg 6 files and extract them to a folder.
2. Copy the PortAudio 2 DLL file to the same folder.
3. When you initialize OwnAudio in your code, just point to the folder path.
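For example, the folder might look like this (illustrative file names; the exact DLLs depend on your FFmpeg and PortAudio builds):

```
C:\AudioLibs\
├── avcodec-60.dll
├── avformat-60.dll
├── avutil-58.dll
├── swresample-4.dll
└── portaudio.dll
```

You would then pass that folder path (here `C:\AudioLibs`) when initializing OwnAudio; check the library's initialization overloads for the exact parameter.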
On Linux, install `portaudio19-dev` (this usually provides PortAudio v2) and `ffmpeg` (version 6 or compatible):

```shell
sudo apt update
sudo apt install portaudio19-dev ffmpeg
```

(Note: Package names may vary slightly depending on your Linux distribution. Make sure you get libraries compatible with FFmpeg version 6.)
OwnAudio is smart and will automatically find and use them if they are installed system-wide. On macOS, install them with Homebrew:

```shell
brew install portaudio
brew install ffmpeg@6
```

If you find this project helpful, consider buying me a coffee!
Here's a quick example of how to use OwnAudio to play an audio file:

```csharp
using Ownaudio;
using Ownaudio.Sources;
using System;
using System.Threading;

// Initialize OwnAudio
OwnAudio.Initialize();

// Create a source manager
var sourceManager = SourceManager.Instance;

// Add an audio file
await sourceManager.AddOutputSource("path/to/audio.mp3");

// Play the audio
sourceManager.Play();

// Wait for the audio to finish
Console.WriteLine("Press any key to stop playback...");
Console.ReadKey();

// Stop playback and clean up
sourceManager.Stop();
OwnAudio.Free();
```

Mixing multiple sources:

```csharp
// Add multiple audio files
await sourceManager.AddOutputSource("path/to/audio1.mp3");
await sourceManager.AddOutputSource("path/to/audio2.mp3");

// Adjust volume for individual sources
sourceManager.SetVolume(0, 0.8f); // 80% volume for first source
sourceManager.SetVolume(1, 0.6f); // 60% volume for second source

// Play mixed audio
sourceManager.Play();
```

Recording from an input device:

```csharp
// Add an input source
await sourceManager.AddInputSource();

// Start recording
sourceManager.Play("output.wav", 16); // 16-bit recording
```

Tempo, pitch, and seeking:

```csharp
// Change tempo without affecting pitch (value range -20 to +20)
sourceManager.SetTempo(0, 10.0); // Speed up by 10%

// Change pitch without affecting tempo (value range -6 to +6 semitones)
sourceManager.SetPitch(0, 2.0); // Raise pitch by 2 semitones

// Seek to a specific position
sourceManager.Seek(TimeSpan.FromSeconds(30)); // Seek to 30 seconds
```

OwnAudio includes a comprehensive effects library:
```csharp
// Apply reverb effect
var reverb = new Reverb(0.5f, 0.3f, 0.4f, 0.7f);
sourceManager.CustomSampleProcessor = reverb;

// Apply delay effect
var delay = new Delay(500, 0.4f, 0.3f, 44100);
sourceManager.CustomSampleProcessor = delay;

// Apply compressor
var compressor = new Compressor(0.5f, 4.0f, 100f, 200f, 1.0f, 44100f);
sourceManager.CustomSampleProcessor = compressor;

// Apply equalizer
var equalizer = new Equalizer(44100);
equalizer.SetBandGain(0, 100f, 1.4f, 3.0f); // Boost bass
sourceManager.CustomSampleProcessor = equalizer;
```

You can implement custom audio processing by extending the SampleProcessorBase class:
```csharp
public class MyAudioProcessor : SampleProcessorBase
{
    public override void Process(Span<float> samples)
    {
        // Process audio samples
        for (int i = 0; i < samples.Length; i++)
        {
            // Example: Simple gain adjustment
            samples[i] *= 0.5f; // 50% volume
        }
    }

    public override void Reset()
    {
        // Reset internal state if needed
    }
}

// Apply the processor to the source manager
var processor = new MyAudioProcessor();
sourceManager.CustomSampleProcessor = processor;
```

OwnAudio supports real-time audio sources for live audio generation and streaming:
```csharp
// Add a real-time source
var realtimeSource = sourceManager.AddRealTimeSource(1.0f, 2); // Volume 1.0, stereo

// Submit audio samples in real-time
float[] samples = new float[1024]; // Your generated audio data
realtimeSource.SubmitSamples(samples);
```

The SourceSound class enables real-time audio streaming, which is ideal for live synthesis, generated audio, and network playback.
Example: generate and stream a sine wave in real time:

```csharp
// Create a real-time audio source
var liveSource = sourceManager.AddRealTimeSource(1.0f, 2); // Volume, channels

Task.Run(async () =>
{
    int sampleRate = 44100;
    int frequency = 440; // A4 note
    float amplitude = 0.3f;
    int samplesPerBuffer = 1024;
    double phase = 0;
    double phaseIncrement = 2.0 * Math.PI * frequency / sampleRate;

    while (liveSource.State != SourceState.Idle)
    {
        float[] buffer = new float[samplesPerBuffer * 2]; // Stereo
        for (int i = 0; i < samplesPerBuffer; i++)
        {
            float sample = (float)(Math.Sin(phase) * amplitude);
            buffer[i * 2] = sample;     // Left channel
            buffer[i * 2 + 1] = sample; // Right channel
            phase += phaseIncrement;
            if (phase >= 2.0 * Math.PI)
                phase -= 2.0 * Math.PI;
        }

        // Submit samples for real-time playback
        liveSource.SubmitSamples(buffer);

        // Control timing for smooth playback
        await Task.Delay(10);
    }
});

// Start playback
sourceManager.Play();
```
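The network example below calls a `ConvertBytesToFloats` helper that the original leaves as pseudo-code. A minimal sketch of one possible implementation, assuming the incoming data is 16-bit little-endian PCM (the wire format is an assumption, not something the library mandates):

```csharp
using System;

public static class PcmConvert
{
    // Converts 16-bit little-endian PCM bytes to floats in [-1, 1].
    // Assumes an even byte count and a little-endian host, which is
    // typical for .NET desktop and mobile targets.
    public static float[] ConvertBytesToFloats(byte[] audioData)
    {
        float[] samples = new float[audioData.Length / 2];
        for (int i = 0; i < samples.Length; i++)
        {
            short s = BitConverter.ToInt16(audioData, i * 2);
            samples[i] = s / 32768f; // Scale Int16 range to [-1, 1]
        }
        return samples;
    }
}
```

If your source sends a different format (24-bit, big-endian, or already-float data), adjust the decoding step accordingly.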
Example: receive audio data from the network and play it in real time:

```csharp
var networkSource = sourceManager.AddRealTimeSource(1.0f, 2);

// Network audio receiver (pseudo-code)
networkClient.OnAudioDataReceived += (audioData) =>
{
    // Convert received network data to float array
    float[] samples = ConvertBytesToFloats(audioData);

    // Submit to real-time source for immediate playback
    networkSource.SubmitSamples(samples);
};

sourceManager.Play();
```

You can wrap this pattern in a reusable generator class:

```csharp
public class AudioGenerator
{
    private SourceSound _source;
    private int _sampleRate;
    private bool _isGenerating;

    public AudioGenerator(SourceManager manager, int sampleRate = 44100)
    {
        _sampleRate = sampleRate;
        _source = manager.AddRealTimeSource(1.0f, 2);
    }

    public void StartGeneration()
    {
        _isGenerating = true;
        Task.Run(async () =>
        {
            while (_isGenerating)
            {
                float[] audioBuffer = GenerateAudio(1024);
                _source.SubmitSamples(audioBuffer);
                await Task.Delay(5); // Smooth streaming
            }
        });
    }

    public void StopGeneration()
    {
        _isGenerating = false;
    }

    private float[] GenerateAudio(int samples)
    {
        // Your custom audio generation logic here
        float[] buffer = new float[samples * 2]; // Stereo

        // Fill buffer with generated audio data
        for (int i = 0; i < samples; i++)
        {
            float sample = GenerateSample(); // Your generation method
            buffer[i * 2] = sample;     // Left
            buffer[i * 2 + 1] = sample; // Right
        }
        return buffer;
    }

    private float GenerateSample()
    {
        // Implement your audio generation algorithm
        return 0.0f;
    }
}

// Usage
var generator = new AudioGenerator(sourceManager);
generator.StartGeneration();
sourceManager.Play();
```

Accessing decoded audio data:

```csharp
// Load source audio data into a byte array
byte[] audioByte = sourceManager.Sources[0].GetByteAudioData(TimeSpan.Zero);

// Load source audio data into a float array
float[] audioFloat = sourceManager.Sources[0].GetFloatAudioData(TimeSpan.Zero);
```

WaveAvaloniaDisplay is a flexible, resource-efficient audio waveform visualization component for Avalonia applications.
The following example demonstrates how to use the WaveAvaloniaDisplay component in an Avalonia application:
```xml
<Window xmlns="https://github.com/avaloniaui"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:audio="using:Ownaudio.Utilities"
        x:Class="MyAudioApp.MainWindow"
        Title="Audio Visualizer" Height="450" Width="800">
    <Grid>
        <audio:WaveAvaloniaDisplay x:Name="waveformDisplay"
                                   WaveformBrush="DodgerBlue"
                                   PlaybackPositionBrush="Red"
                                   VerticalScale="1.0"
                                   DisplayStyle="MinMax"/>
    </Grid>
</Window>
```

```csharp
// Set audio data from existing float array
waveformDisplay.SetAudioData(SourceManager.Instance.Sources[0].GetFloatAudioData(TimeSpan.Zero));

// Handle playback position changes
waveformDisplay.PlaybackPositionChanged += OnPlaybackPositionChanged;

// Load directly from audio file
waveformDisplay.LoadFromAudioFile("audio.mp3");

// Load with specific decoder preference
waveformDisplay.LoadFromAudioFile("audio.mp3", preferFFmpeg: true);

// Asynchronous loading
await waveformDisplay.LoadFromAudioFileAsync("large_audio.wav");

// Loading from stream
using var fileStream = File.OpenRead("audio.mp3");
waveformDisplay.LoadFromAudioStream(fileStream);
```

| Property | Type | Description |
|---|---|---|
| WaveformBrush | IBrush | The color of the waveform |
| PlaybackPositionBrush | IBrush | The color of the playback position indicator |
| VerticalScale | double | Vertical scaling of the waveform (1.0 = original size) |
| DisplayStyle | WaveformDisplayStyle | The waveform display style (MinMax, Positive, RMS) |
| ZoomFactor | double | Zoom factor (1.0 = full view, larger values = more detailed view) |
| ScrollOffset | double | Horizontal scroll position (0.0 - 1.0) |
| PlaybackPosition | double | Current playback position (0.0 - 1.0) |

| Event | Parameter | Description |
|---|---|---|
| PlaybackPositionChanged | double | Triggered when the user changes the playback position |
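The display styles reduce many samples to one value per pixel column. The component's internal code is not shown here, but a minimal sketch of what MinMax versus RMS reduction means (the helper names and bucketing are illustrative, not the component's API):

```csharp
using System;

public static class WaveformReduce
{
    // MinMax style: each pixel column draws a vertical line from the
    // bucket's minimum sample to its maximum sample.
    public static (float Min, float Max) MinMax(ReadOnlySpan<float> bucket)
    {
        float min = float.MaxValue, max = float.MinValue;
        foreach (float s in bucket)
        {
            if (s < min) min = s;
            if (s > max) max = s;
        }
        return (min, max);
    }

    // RMS style: each pixel column draws the root-mean-square of the
    // bucket, which tracks perceived loudness rather than peaks.
    public static float Rms(ReadOnlySpan<float> bucket)
    {
        double sum = 0;
        foreach (float s in bucket) sum += (double)s * s;
        return (float)Math.Sqrt(sum / bucket.Length);
    }
}
```

MinMax preserves transient peaks that RMS smooths away, which is why the two styles look quite different on percussive material.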
The library follows a layered architecture.
You can configure the audio engine with specific parameters:
```csharp
// Configure output engine options
SourceManager.OutputEngineOptions = new AudioEngineOutputOptions(
    OwnAudioEngine.EngineChannels.Stereo,
    44100,
    0.02 // Low latency
);

// Configure input engine options
SourceManager.InputEngineOptions = new AudioEngineInputOptions(
    OwnAudioEngine.EngineChannels.Mono,
    44100,
    0.02 // Low latency
);

// Set frames per buffer
SourceManager.EngineFramesPerBuffer = 512;
```

For reference, 512 frames at 44,100 Hz is about 11.6 ms of audio per buffer, in the same range as the 0.02 s (20 ms) latency targets above.

Special thanks to the creators of the following repositories, whose code was instrumental in the development of OwnAudio: