Machine Learning Inference Library
pykeio/ort
A high-performance Rust interface for hardware-accelerated machine learning inference and training with ONNX models, leveraging ONNX Runtime and pure-Rust backends.
Core Features
Hardware-accelerated ML inference and training.
Robust Rust interface for ONNX Runtime.
Supports ONNX models from various ML frameworks (PyTorch, TensorFlow, Keras, etc.).
Lightweight and efficient for on-device deployment.
Extensible: alternative pure-Rust backends can be used in place of ONNX Runtime.
Detailed Introduction
`ort` is a high-performance Rust library for machine learning inference and training with models in the Open Neural Network Exchange (ONNX) format. It wraps Microsoft's ONNX Runtime in a safe, idiomatic Rust interface, enabling hardware-accelerated computation across a wide range of devices, and it can also be backed by pure-Rust runtimes when linking against ONNX Runtime is impractical. Because ONNX is an interchange format, models exported from frameworks such as PyTorch, TensorFlow, and Keras can all be deployed through `ort`, in both datacenter and edge environments.
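To make the workflow concrete, here is a minimal inference sketch. It follows the general shape of the crate's 2.x API, but exact method names, the `inputs!` macro signature, and the tensor-extraction API have changed between releases, so treat this as an illustration rather than copy-paste code. The path `model.onnx` and the tensor names `"input"`/`"output"` are placeholders for whatever your exported model actually declares:

```rust
use ort::session::{builder::GraphOptimizationLevel, Session};
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Build a session from an ONNX file; "model.onnx" is a placeholder path.
    let session = Session::builder()?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .commit_from_file("model.onnx")?;

    // A dummy f32 input of shape [1, 3, 224, 224] (a common image-model
    // shape; your model's expected shape will differ).
    let data = vec![0.0_f32; 1 * 3 * 224 * 224];
    let input = Tensor::from_array(([1usize, 3, 224, 224], data))?;

    // Run inference; "input" must match the model's declared input name.
    // (In some pre-release versions, `ort::inputs!` returned a Result
    // and needed a trailing `?`.)
    let outputs = session.run(ort::inputs!["input" => input])?;

    // Extract the output tensor's f32 data. The extraction API varies by
    // version (ndarray view vs. shape-and-slice pair); this assumes the
    // latter form.
    let (shape, values) = outputs["output"].try_extract_tensor::<f32>()?;
    println!("output shape: {shape:?}, first value: {}", values[0]);
    Ok(())
}
```

Hardware acceleration is opted into on the same builder: with the matching Cargo feature enabled, an execution provider (for example CUDA or TensorRT) can be registered before `commit_from_file`, and ONNX Runtime falls back to the CPU provider for any operators the accelerator cannot handle.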