
---
description: MNN export utilities for converting ONNX models to MNN format for efficient inference on mobile and embedded devices. Supports FP16 and INT8 weight quantization for optimized deployment using Alibaba's MNN framework.
keywords: Ultralytics, MNN, model export, ONNX to MNN, Alibaba MNN, mobile deployment, embedded systems, FP16, INT8 quantization, lightweight inference, edge deployment
---

# Reference for `ultralytics/utils/export/mnn.py`

!!! success "Improvements"

    This page is sourced from [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/mnn.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/mnn.py). Have an improvement or example to add? Open a [Pull Request](https://docs.ultralytics.com/help/contributing/) — thank you! 🙏

## ::: ultralytics.utils.export.mnn.onnx2mnn

<br><br>