
---
description: Explore TensorRTBackend for high-performance GPU inference with NVIDIA TensorRT, optimizing YOLO models for production deployment.
keywords: Ultralytics, TensorRTBackend, TensorRT inference, NVIDIA TensorRT, GPU inference, .engine models, production deployment, deep learning
---

# Reference for `ultralytics/nn/backends/tensorrt.py`

!!! success "Improvements"

    This page is sourced from [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/backends/tensorrt.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/backends/tensorrt.py). Have an improvement or example to add? Open a [Pull Request](https://docs.ultralytics.com/help/contributing/) — thank you! 🙏

::: ultralytics.nn.backends.tensorrt.TensorRTBackend