266 packages tagged with “gpt”
The easiest way to use the Ollama API in .NET
LLamaSharp is a cross-platform library to run 🦙LLaMA/Mtmd models (and others) on your local device. Based on [llama.cpp](https://github.com/ggerganov/llama.cpp), inference with LLamaSharp is efficient on both CPU and GPU. With its higher-level APIs and RAG support, it's convenient to deploy LLMs (Large Language Models) in your application with LLamaSharp.
LLamaSharp.Backend.Cpu is a backend for LLamaSharp that runs on CPU only.
OpenAI GPT utils, e.g. GPT3 Tokenizer
LLamaSharp.Backend.Cuda12 is a backend for LLamaSharp to use with Cuda12.
FoundationaLLM.Common is a .NET library that the FoundationaLLM.Client.Core and FoundationaLLM.Client.Management client libraries share as a common dependency.
LLamaSharp.Backend.Cuda12.Windows contains the Windows binaries for LLamaSharp with Cuda12 support.
The integration of LLamaSharp and Microsoft semantic-kernel.
LLamaSharp.Backend.Cuda12.Linux contains the Linux binaries for LLamaSharp with Cuda12 support.
FoundationaLLM.Configuration is a .NET library for the Configuration resource provider. The resource provider provides a consistent way to manage and access configuration settings across a FoundationaLLM instance.
The integration of LLamaSharp and Microsoft kernel-memory. It makes it easy to add document search to LLamaSharp model inference.
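The document-search idea behind such RAG integrations can be illustrated with a toy sketch: embed documents and a query, then rank documents by cosine similarity. The bag-of-words "embedding" below is purely illustrative; real integrations like kernel-memory use learned embeddings, and none of these names come from the package's API.

```python
# Conceptual sketch of document search for RAG, assuming nothing about
# the actual kernel-memory/LLamaSharp APIs. Bag-of-words stands in for
# a real embedding model.
import math
from collections import Counter

def embed(text):
    # Trivial "embedding": word-count vector (illustration only).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "llama runs on cpu",
    "cuda backend for gpu inference",
    "tokenizer for gpt models",
]
query = "gpu inference backend"
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # the CUDA/GPU document ranks first
```

In a real deployment the retrieved top-ranked chunks would be prepended to the model prompt before inference.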
Cuda12 backend for LM-Kit.NET (Windows)
FoundationaLLM.Client.Management is a .NET library that simplifies integrating the FoundationaLLM Management API into your projects and data workflows.
FoundationaLLM.Client.Core is a .NET library that simplifies integrating the FoundationaLLM Core API into your projects and data workflows.
Cuda12 backend for LM-Kit.NET (Linux)
FoundationaLLM.DataPipelinePlugin contains the standard Data Pipeline plugins provided by FoundationaLLM.
A library for slicing text data into smaller chunks while attempting to preserve context.
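Context-preserving chunking is usually done with overlapping windows, so each chunk shares some text with its neighbors. The sketch below shows the idea with a word-level sliding window; the function name, sizes, and overlap strategy are assumptions for illustration, not the library's API.

```python
# Illustrative sliding-window chunker: split text into overlapping
# word windows so adjacent chunks share context. Parameters are
# arbitrary defaults, not any package's.
def chunk_words(text, chunk_size=50, overlap=10):
    words = text.split()
    step = chunk_size - overlap  # advance by this many words per chunk
    chunks = []
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
    return chunks

# Twelve words, 5-word chunks, 2-word overlap -> 4 chunks.
chunks = chunk_words(" ".join(f"w{i}" for i in range(12)), chunk_size=5, overlap=2)
```

Each consecutive pair of chunks repeats `overlap` words, which is what keeps context intact across chunk boundaries.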
FoundationaLLM.DataSource is a .NET library for the DataSource resource provider. The resource provider manages data sources across a FoundationaLLM instance.
LLamaSharp.Backend.Cuda11 is a backend for LLamaSharp to use with Cuda11.
FoundationaLLM.Plugin is a .NET library for the Plugin resource provider. The resource provider manages plugins across a FoundationaLLM instance.
FoundationaLLM.Prompt is a .NET library for the Prompt resource provider. The resource provider manages prompts across a FoundationaLLM instance.
Unofficial BPE tokenizer implementation for OpenAI chat completion models (GPT-3.5/GPT-4).
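The BPE (byte-pair encoding) merging at the heart of GPT-style tokenizers can be sketched in a few lines: repeatedly merge the adjacent token pair with the best merge rank until no ranked pair remains. The merge table below is a toy example, not the actual GPT-3.5/GPT-4 vocabulary.

```python
# Minimal illustration of BPE merging (toy merge table, not the real
# GPT vocabulary or any package's API).
def bpe_merge(tokens, merges):
    """Repeatedly merge the lowest-rank adjacent pair until none apply."""
    tokens = list(tokens)
    while True:
        best = None
        for i in range(len(tokens) - 1):
            rank = merges.get((tokens[i], tokens[i + 1]))
            if rank is not None and (best is None or rank < best[0]):
                best = (rank, i)
        if best is None:
            return tokens
        _, i = best
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]

# Toy merge table: lower rank = learned earlier = merged first.
merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe_merge("lower", merges))  # ['low', 'er']
```

Real tokenizers apply the same loop over byte-level tokens with tens of thousands of learned merges.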
.NET wrapper of the HuggingFace Tokenizers library.
Auto-generated .NET bindings for Llama.cpp.
Native (Rust) wrapper of the HuggingFace Tokenizers library.
Powerful layer on top of OpenAI supporting chaining and recursion scenarios. Includes a fluent SDK for chaining, templates, and text processing. Still very early and .NET 10 only; docs and broader support to follow.
A simple, lightweight library that wraps the ChatGPT API. Includes support for dependency injection.
LLamaSharp.Backend.Cuda11.Windows contains the Windows binaries for LLamaSharp with Cuda11 support.
LLamaSharp.Backend.Cuda11.Linux contains the Linux binaries for LLamaSharp with Cuda11 support.
Enterprise-Grade .NET SDK for Integrating Generative AI Capabilities.