---
description: MNN export utilities for converting ONNX models to MNN format for efficient inference on mobile and embedded devices. Supports FP16 and INT8 weight quantization for optimized deployment using Alibaba's MNN framework.
keywords: Ultralytics, MNN, model export, ONNX to MNN, Alibaba MNN, mobile deployment, embedded systems, FP16, INT8 quantization, lightweight inference, edge deployment
---
# Reference for `ultralytics/utils/export/mnn.py`
!!! success "Improvements"

    This page is sourced from [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/mnn.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/mnn.py). Have an improvement or example to add? Open a [Pull Request](https://docs.ultralytics.com/help/contributing/) — thank you! 🙏