Description
Env
- GPU: Jetson Nano
- OS: Ubuntu 20.04 (JetPack 5.1.2)
- CUDA version: 11.4
- TensorRT version: 8.5.2.2
About this repo
- Branch: master
- Model: YOLOv8 (custom-trained 5-class model, converted to WTS via gen_wts.py)
Problem
I am using this repository's YOLOv8 code to build a YOLOv8-small (yolov8s) engine. The engine builds successfully when I follow the provided instructions, but when I run inference on a video with yolov8_det_trt.py I get the following warning:
DeprecationWarning: Use network created with NetworkDefinitionCreationFlag::EXPLICIT_BATCH flag instead.
I have tried modifying the engine-creation code to pass the EXPLICIT_BATCH flag when the network is created, but with that change the build fails with compile errors.
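For reference, this is the shape of the change I attempted, written against the standard TensorRT 8.x C++ API; the function name, signature, and input dimensions below are placeholders, not code taken from this repo. My understanding is that flipping the flag alone is not enough: with EXPLICIT_BATCH every tensor (including the input) must carry the batch dimension, which may be why my build broke:

```cpp
#include "NvInfer.h"

using namespace nvinfer1;

// Sketch only: the function name and the 640x640 input are illustrative.
IHostMemory* buildExplicitBatchNetwork(IBuilder* builder, IBuilderConfig* config) {
    // Implicit batch (deprecated): builder->createNetworkV2(0U);
    // Explicit batch: pass the kEXPLICIT_BATCH bit instead.
    const uint32_t explicitBatch =
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(explicitBatch);

    // In explicit-batch mode the input must be declared 4-D (N, C, H, W);
    // a 3-D (C, H, W) input that worked in implicit-batch mode will no
    // longer match the rest of the network.
    ITensor* input = network->addInput("data", DataType::kFLOAT, Dims4{1, 3, 640, 640});
    (void)input;
    // ... the YOLOv8 layer definitions would follow here ...

    return builder->buildSerializedNetwork(*network, *config);
}
```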
My questions:
- How can I correctly build the YOLOv8 engine with EXPLICIT_BATCH enabled?
- Is there a supported way to update the C++ code for explicit batch mode without breaking engine creation?
- Is there a way to run inference with an engine built in implicit-batch mode (the engine produced by following the instructions as written)?
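On the last question, my current understanding (hedged; based on the TensorRT 8.x Python API, with the engine path and helper name as placeholders) is that implicit-batch engines remain executable, just through the older execute_async call rather than execute_async_v2, and that the engine itself reports which mode it was built in:

```python
import tensorrt as trt
import pycuda.driver as cuda  # assuming the same CUDA binding yolov8_det_trt.py uses

def run_engine(engine_path: str, bindings, stream):
    """Sketch: dispatch to the execute call matching the engine's batch mode."""
    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    if engine.has_implicit_batch_dimension:
        # Implicit-batch engine (built without EXPLICIT_BATCH): the batch
        # size is supplied at execution time. Deprecated, but still runs.
        context.execute_async(1, bindings, stream.handle)
    else:
        # Explicit-batch engine: batch size is baked into the binding shapes.
        context.execute_async_v2(bindings, stream.handle)
```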