57 packages tagged with “inference”
Execution engine for the NRules rules engine.
Canonical rules model to define rules for the NRules rules engine.
Fluent DSL for authoring rules in C# with the NRules rules engine.
NRules is an open source rules engine for .NET, based on the Rete matching algorithm, with rules authored in C# using an internal DSL.
Adds support for basic inferencing from an RDF Schema/Ontology
A library which provides a full SPIN implementation using dotNetRDF's Leviathan SPARQL engine
Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming. This package contains classes and methods needed to execute the inference code.
Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming. This package contains the Infer.NET Compiler, which takes model descriptions written using the Infer.NET API and converts them into inference code.
Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming. This package contains complete machine learning applications including a classifier and a recommender system.
A lightweight, linq-friendly inference library for .NET
Rhino is Picovoice's Speech-to-Intent engine. It directly infers intent from spoken commands within a given context of interest, in real time. For example, given the spoken command "Can I have a small double-shot espresso?", Rhino infers that the user wants to order a drink with these specifications: { "type": "espresso", "size": "small", "numberOfShots": "2" }. Rhino is: - built on deep neural networks trained in real-world environments. - compact and computationally efficient, making it well suited to IoT. - cross-platform: it is implemented in fixed-point ANSI C. Raspberry Pi (Zero, 3, 4, 5), Android, iOS, Linux (x86_64), macOS (x86_64), Windows (x86_64, arm64), and web browsers are supported. Furthermore, support for various ARM Cortex-A microprocessors and ARM Cortex-M microcontrollers is available for enterprise customers. - self-service: developers and UX designers can train custom models using Picovoice Console.
Microsoft Machine Learning Scoring library for deep learning model inference. The current version of the library supports inference on ONNX v1.3 and TensorFlow v1.10.0 models. The library supports CPU execution with MKL/MKL-DNN acceleration, and also supports CUDA GPU devices. For CPU execution of ONNX models, no extra libraries are required; however, scoring TensorFlow models requires the CUDA libraries for both CPU and GPU execution. Download and install the CUDA 9.2 toolkit, cuDNN, and device drivers separately. This package provides a .NET Standard 1.3 compatible module for maximum portability. Currently, the only supported platform is x64 CPU on Windows.
Infer.NET is a framework for running Bayesian inference in graphical models. It can also be used for probabilistic programming. This package contains visualization tools for exploring and analyzing models on the Windows platform.
AI abstractions and deterministic mock implementations for intent inference.
A library for providing tagging suggestions for documents, music, photos, etc.
Picovoice is an end-to-end platform for building voice products on your terms. It enables creating voice experiences similar to Alexa and Google Assistant, but it runs entirely on-device. Picovoice is: - Private: everything is processed offline; intrinsically HIPAA and GDPR compliant. - Reliable: runs without needing constant connectivity. - Zero latency: edge-first architecture eliminates unpredictable network delay. - Accurate: resilient to noise and reverberation; it outperforms cloud-based alternatives by wide margins. - Cross-platform: design once, deploy anywhere; build using familiar languages and frameworks.
When developing generic extension methods, additional generic type parameters are sometimes needed, and when any of them cannot be inferred, all of the generic type parameters must be specified explicitly. This library provides IOutParam&lt;out T&gt;, IInParam&lt;in T&gt;, and ITypeParam&lt;T&gt; interfaces, with corresponding factories, to help developers create method parameters that let the compiler infer the generic type arguments.
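The marker-parameter pattern described above can be sketched in plain C#. This is a minimal, hypothetical reproduction of the idea, not the library's actual API: the names `ITypeParam<T>`, `TypeParam<T>`, and `Param.Of<T>()` are illustrative assumptions. The point is that passing a value of type `ITypeParam<TOut>` lets the compiler infer `TOut`, so the caller never has to spell out all the generic arguments.

```csharp
using System;

// Illustrative marker interface: carries a type, no data.
// (Hypothetical names; the package's real API may differ.)
public interface ITypeParam<T> { }

public sealed class TypeParam<T> : ITypeParam<T> { }

public static class Param
{
    // Factory that produces a typed marker the compiler can infer from.
    public static ITypeParam<T> Of<T>() => new TypeParam<T>();
}

public static class Caster
{
    // Without the ITypeParam<TOut> marker argument, the caller would have
    // to write CastTo<object, string>(...) because TOut appears only in
    // the return type and cannot be inferred from the arguments.
    public static TOut CastTo<TIn, TOut>(TIn value, ITypeParam<TOut> _)
        => (TOut)(object)value;
}

public static class Demo
{
    public static void Main()
    {
        object boxed = "hello";
        // TIn is inferred from 'boxed', TOut from the marker parameter.
        string s = Caster.CastTo(boxed, Param.Of<string>());
        Console.WriteLine(s); // hello
    }
}
```

The design choice here is that the marker value exists purely so type inference has something to latch onto; at runtime it carries no information and is discarded.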
Auxiliary type inference helpers that avoid writing out types that are not strictly necessary.
HuggingFace Inference API provider for HPD-Agent
Lightweight and friendly .NET library for working with OWL2 ontologies
Azure AI Inference provider for HPD-Agent
Microsoft Foundry Local Core native library - Native AOT compiled multi-platform AI inference library
Annotate a type with [Choice] to transform it into a highly performant and flexible disjoint union.
OWLSharp extensions for working with LinkedData ontologies and vocabularies (SKOS, GEOSPARQL, OWL-TIME)
Very basic knowledge base implementations that use the models defined by the SCFirstOrderLogic package.
Microsoft Foundry Local Core WinML native library - Native AOT compiled Windows AI inference library with WinML support
Local intent classification using ONNX Runtime. Implements IIntentModel for offline or low-latency inference.
Shared contracts and protobuf definitions for InferenceGateway gRPC services.