Network Deployment

It is one thing to train a model and another to deploy and optimize it. Once training is done, you want inference to be as fast as possible.

The export pipeline goes .pt → ONNX → TensorRT.

TensorRT seems to be platform-specific: an engine built on one GPU/driver setup generally has to be rebuilt for another.


Task 1 for WATonomous

First task: look into ONNX export for YOLOv5 and see how it performs at inference time.

  • YOLOv5 lets you run inference with a pretrained model. ONNX is just an interchange format for the network, but exporting to it is typically faster at inference than the raw .pt model
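To compare the .pt and ONNX backends fairly, time repeated forward passes and look at the median. A minimal, stdlib-only sketch of that harness (the two predict lambdas are hypothetical stand-ins; swap in the real model calls):

```python
import time
import statistics

def benchmark(predict, n_warmup=10, n_runs=100):
    """Time repeated calls to `predict`; return median latency in ms."""
    for _ in range(n_warmup):  # warm-up excludes one-time setup costs
        predict()
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        predict()
        times.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(times)

# Hypothetical stand-ins for the real models (replace with actual inference).
slow_pt = lambda: sum(i * i for i in range(20000))
fast_onnx = lambda: sum(i * i for i in range(2000))

pt_ms = benchmark(slow_pt)
onnx_ms = benchmark(fast_onnx)
print(f".pt ~{pt_ms:.2f} ms, ONNX ~{onnx_ms:.2f} ms")
```

Median over many runs (after warm-up) is less noisy than a single timed call, which matters when comparing two backends whose speeds are close.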

Rosbags are found in /mnt/wato-drive/rosbags/year5/; these are played back and fed through the Rosbridge.

To compare speed in ROS, run rostopic list to find the topic, then rostopic hz <name> to measure its publish rate.
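rostopic hz essentially reports the average rate between consecutive messages. A stdlib sketch of that computation over arrival timestamps (the timestamps here are made up):

```python
import statistics

def topic_hz(stamps):
    """Average publish rate (Hz) from message arrival times, like `rostopic hz`."""
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return 1.0 / statistics.mean(deltas)

# Example: messages arriving every 0.1 s should read as ~10 Hz.
stamps = [i * 0.1 for i in range(50)]
print(f"{topic_hz(stamps):.1f} Hz")
```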


python export.py --weights yolov5s.pt --include engine --device 0 (exports the .pt weights to a TensorRT engine; script and weights filenames assumed from the YOLOv5 repo)


python detect.py --weights yolov5s.onnx --source dog.png (runs ONNX inference on an image; detect.py assumed from the YOLOv5 repo)

The exported engine file can be big; check VRAM usage with nvidia-smi.
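For watching VRAM while the model runs, nvidia-smi can emit machine-readable output via its query flags (nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits). A small sketch that parses that output; a sample string stands in for the command since no GPU is assumed here:

```python
import subprocess

def gpu_mem_used_mib(raw=None):
    """Parse `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`.

    Returns one MiB value per GPU. Pass `raw` to parse captured output;
    otherwise the command is invoked directly.
    """
    if raw is None:
        raw = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"], text=True)
    return [int(line) for line in raw.split() if line]

# Sample output from a hypothetical two-GPU machine:
sample = "3425\n1021\n"
print(gpu_mem_used_mib(sample))  # one MiB reading per GPU
```

Polling this in a loop while inference runs gives a rough peak-VRAM number to compare across the .pt, ONNX, and TensorRT variants.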

To look into: Deployment here