---
description: TensorRT engine export utilities for converting ONNX models to optimized TensorRT engines. Provides functions for ONNX export from PyTorch models and TensorRT engine generation with support for FP16/INT8 quantization, dynamic shapes, DLA acceleration, and INT8 calibration for NVIDIA GPU inference optimization.
keywords: Ultralytics, TensorRT export, ONNX export, PyTorch to ONNX, quantization, FP16, INT8, dynamic shapes, DLA acceleration, GPU inference, model optimization, calibration, NVIDIA, inference engine, model export
---

# Reference for `ultralytics/utils/export/engine.py`

!!! success "Improvements"

    This page is sourced from [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/engine.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/engine.py). Have an improvement or example to add? Open a [Pull Request](https://docs.ultralytics.com/help/contributing/) — thank you! 🙏
<br>

## ::: ultralytics.utils.export.engine.best_onnx_opset
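Choosing an ONNX opset is essentially a compatibility lookup: newer PyTorch releases can emit newer opsets. The sketch below illustrates that idea only; the table values and the helper name are assumptions for illustration, not the actual logic in `engine.py`.

```python
# Illustrative sketch only: the real best_onnx_opset() in
# ultralytics/utils/export/engine.py may use different logic.
# Hypothetical torch-version-to-opset table (plausible, not authoritative).
_TORCH_TO_OPSET = {
    (2, 5): 20,
    (2, 3): 19,
    (2, 1): 18,
    (2, 0): 17,
    (1, 13): 17,
    (1, 12): 16,
}


def best_onnx_opset_sketch(torch_version: str, default: int = 12) -> int:
    """Return the highest ONNX opset assumed to work for a torch version string."""
    major, minor = (int(x) for x in torch_version.split(".")[:2])
    # Walk the table from newest to oldest and take the first entry
    # whose torch version is <= the installed one.
    for (tmaj, tmin), opset in sorted(_TORCH_TO_OPSET.items(), reverse=True):
        if (major, minor) >= (tmaj, tmin):
            return opset
    return default  # conservative fallback for very old torch builds
```

For example, `best_onnx_opset_sketch("2.6.0")` falls through to the newest table entry, while an unlisted old version returns the conservative default.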

<br><br><hr><br>

## ::: ultralytics.utils.export.engine.torch2onnx
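PyTorch-to-ONNX export with dynamic shapes hinges on a `dynamic_axes` mapping that marks which tensor dimensions may vary at runtime. The helper below is an assumption for illustration (the names `make_dynamic_axes`, `"images"`, and `"output0"` are not taken from `torch2onnx`'s actual signature); it shows the shape of the mapping that `torch.onnx.export` expects.

```python
# Sketch of how dynamic-shape export is typically wired up. The helper
# name and the "images"/"output0" tensor names are assumptions, not the
# actual torch2onnx() API.


def make_dynamic_axes(batch: bool = True, hw: bool = True) -> dict:
    """Build a torch.onnx.export-style dynamic_axes mapping for a detector."""
    axes: dict = {"images": {}, "output0": {}}
    if batch:  # dimension 0 varies with batch size
        axes["images"][0] = "batch"
        axes["output0"][0] = "batch"
    if hw:  # allow variable input height/width (NCHW layout assumed)
        axes["images"][2] = "height"
        axes["images"][3] = "width"
    return axes


# With torch installed, the export itself would look roughly like:
#   torch.onnx.export(model, im, "model.onnx", opset_version=opset,
#                     input_names=["images"], output_names=["output0"],
#                     dynamic_axes=make_dynamic_axes())
```

Dimensions left out of the mapping stay fixed at the shape of the example input passed to the exporter.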

<br><br><hr><br>

## ::: ultralytics.utils.export.engine.onnx2engine
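Building a TensorRT engine with dynamic shapes requires an optimization profile giving (min, opt, max) shapes per input, which TensorRT uses to tune kernels. The helper below is a minimal sketch of how such shapes could be derived from a nominal NCHW input; the actual `onnx2engine` drives the `tensorrt` Builder API, and this derivation is an assumption, not its implementation.

```python
# Illustrative shape derivation for a TensorRT dynamic-batch optimization
# profile. The real onnx2engine() uses the tensorrt Builder API
# (create_optimization_profile / set_shape); this sketch covers only the
# (min, opt, max) shape computation, under assumed conventions.


def profile_shapes(shape, max_batch=8):
    """Return (min, opt, max) shapes for a dynamic-batch NCHW input."""
    n, c, h, w = shape
    min_shape = (1, c, h, w)  # smallest batch the engine must accept
    opt_shape = (max(1, max_batch // 2), c, h, w)  # shape TensorRT optimizes for
    max_shape = (max_batch, c, h, w)  # largest batch the engine supports
    return min_shape, opt_shape, max_shape
```

Picking the opt shape near the expected production batch size matters most, since TensorRT selects kernels that are fastest at that shape.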

<br><br>