Update COCO val mAP scores for DEIMv2 large (obj365 pretrained) model
to reflect the latest evaluation results: AP 56.8->57.2, AP50 74.6->74.9,
AP75 62.0->62.2, AP_small 38.8->39.6, AP_medium 62.2->62.5.
- Add DEIMv2 (Ultralytics) and DEIMv2 (Ultralytics, obj365) l-size benchmark entries
- Rename DEIMv2 to DEIMv2 (paper) to distinguish paper results from Ultralytics-trained results
- Reorder RF-DETR (TopK) entries to appear before DEIMv2 paper results
- Add MODEL_STYLES entries for renamed and new DEIMv2 variants
- Add benchmark entry for DINOv3-RTDETR (s) pretrained on Object365 with 52.3 mAP
- Register model style for "DINOv3-RTDETR (obj365)" in benchmark plot
- Change default Y-axis metric from ap_large to ap (mAP50-95)
- Add DINOv3-STA-RTDETR benchmark entries (l3 and l6 variants)
- Add DEIMv2 benchmark entries (pico, n, s, m, l, x variants)
- Correct YOLO26-RTDETR l variant latency from 8.8 to 8.6 ms
- Register DINOv3-STA-RTDETR marker style for plot rendering
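The style-registration bullets above could be sketched as a lookup table; the dict shape, field names, and marker choices here are illustrative assumptions, not the repo's actual values:

```python
# Hypothetical MODEL_STYLES entries for the renamed and newly added variants.
# Keys are model labels; "marker" and "label_offset" are assumed field names.
MODEL_STYLES = {
    "DEIMv2 (paper)":               {"marker": "o", "label_offset": (4, -10)},
    "DEIMv2 (Ultralytics)":         {"marker": "s", "label_offset": (4, 4)},
    "DEIMv2 (Ultralytics, obj365)": {"marker": "^", "label_offset": (4, 4)},
    "DINOv3-RTDETR (obj365)":       {"marker": "D", "label_offset": (4, 4)},
    "DINOv3-STA-RTDETR":            {"marker": "v", "label_offset": (4, 4)},
}

def style_for(name):
    # Fall back to a default style for unregistered models.
    return MODEL_STYLES.get(name, {"marker": "o", "label_offset": (0, 0)})
```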
Merge benchmark_plot_cpu.py into benchmark_plot.py and delete the now-redundant
separate file. The unified script supports all hardware targets (T4, M5 CPU/CoreML,
Xeon, Jetson AGX Thor/Orin, Jetson Orin Nano Super) and a selectable Y-axis metric
(ap, ap50, ap75, ap_small, ap_medium, ap_large) via a --metric CLI flag.
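A minimal sketch of the `--metric` flag described above, using argparse; the `--benchmark` flag and default values are assumptions for illustration:

```python
import argparse

METRICS = ("ap", "ap50", "ap75", "ap_small", "ap_medium", "ap_large")

def parse_args(argv=None):
    p = argparse.ArgumentParser(description="Unified benchmark plot")
    # Restrict --metric to the supported Y-axis metrics.
    p.add_argument("--metric", choices=METRICS, default="ap",
                   help="Y-axis metric (default: ap, i.e. mAP50-95)")
    # Hypothetical flag selecting the hardware target (e.g. T4, Xeon).
    p.add_argument("--benchmark", default="T4",
                   help="hardware target key into the BENCHMARKS dict")
    return p.parse_args(argv)
```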
- Replace hardcoded per-series globals with a BENCHMARKS dict keyed by target name
- Add MODEL_STYLES dict for per-model marker and label-offset configuration
- Support optional latency error bars (4-tuple data points) in plot_series
- Support rich metric dicts alongside legacy scalar mAP values
- Auto-generate output filename based on active BENCHMARK and metric selection
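The data-handling changes above could look roughly like the following; the tuple layout, dict fields, and filename scheme are assumptions about the refactor, not the script's actual code:

```python
def metric_value(entry, metric="ap"):
    # Accept either a legacy scalar mAP or a rich per-metric dict.
    return entry[metric] if isinstance(entry, dict) else entry

def split_point(point):
    # Assumed layouts: 3-tuple (latency_ms, metrics, label);
    # 4-tuple adds a latency error-bar value after latency.
    if len(point) == 4:
        latency, err, metrics, label = point
    else:
        (latency, metrics, label), err = point, None
    return latency, err, metrics, label

def output_filename(benchmark, metric):
    # Derive the output name from the active benchmark and metric.
    return f"benchmark_{benchmark.lower().replace(' ', '_')}_{metric}.png"
```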