⚠ Switch to EXCALIDRAW VIEW in the MORE OPTIONS menu of this document. ⚠

Text Elements

LiDAR Object Detection

Pre-trained YOLOv8 on COCO

Traffic Light Detection

/augmented_camera_detections

Camera Object Detection
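A minimal sketch of this stage, assuming the ultralytics package and a stock COCO-pretrained YOLOv8 checkpoint; the weight file is a placeholder and the class IDs follow coco.yaml (0 person, 1 bicycle, 2 car, 9 traffic light):

```python
# Minimal sketch of the camera object detection stage (assumes `ultralytics`).
from ultralytics import YOLO

CLASSES_OF_INTEREST = {0: "pedestrian", 1: "cyclist", 2: "car", 9: "traffic_light"}

model = YOLO("yolov8m.pt")  # any COCO-pretrained YOLOv8 weight (placeholder)

def detect(frame):
    """Run YOLOv8 on a BGR image and return (label, confidence, xyxy) tuples."""
    results = model(frame, verbose=False)[0]
    detections = []
    for box in results.boxes:
        cls_id = int(box.cls)
        if cls_id in CLASSES_OF_INTEREST:
            detections.append((
                CLASSES_OF_INTEREST[cls_id],
                float(box.conf),
                box.xyxy[0].tolist(),  # [x1, y1, x2, y2] in pixels
            ))
    return detections
```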

Detection3DArray

LiDAR

(pedestrians, cyclists, cars, traffic lights)

Camera

World Modelling

Runs PointPillars
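A rough sketch of packing PointPillars outputs into the Detection3DArray published on /lidar_detections, assuming ROS 2 with the Humble-era vision_msgs field layout; the inference itself is replaced by a placeholder `boxes` list:

```python
# Rough sketch: PointPillars boxes -> vision_msgs Detection3DArray.
# `boxes` is a placeholder list of (x, y, z, dx, dy, dz, score, label)
# tuples, where label is a class-name string.
from vision_msgs.msg import Detection3D, Detection3DArray, ObjectHypothesisWithPose

def to_detection3d_array(boxes, header):
    msg = Detection3DArray()
    msg.header = header
    for x, y, z, dx, dy, dz, score, label in boxes:
        det = Detection3D()
        det.header = header
        det.bbox.center.position.x = float(x)   # box centre in the LiDAR frame
        det.bbox.center.position.y = float(y)
        det.bbox.center.position.z = float(z)
        det.bbox.size.x = float(dx)             # box extents in metres
        det.bbox.size.y = float(dy)
        det.bbox.size.z = float(dz)
        hyp = ObjectHypothesisWithPose()
        hyp.hypothesis.class_id = str(label)
        hyp.hypothesis.score = float(score)
        det.results.append(hyp)
        msg.detections.append(det)
    return msg
```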

Annotate traffic lights with color using classical CV techniques
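A minimal sketch of one way to do that, assuming OpenCV: crop each detected traffic light, threshold the crop in HSV, and report the color with the most in-range pixels (or none). The HSV ranges and the minimum pixel count are illustrative, not tuned values:

```python
# Illustrative classical-CV color annotation for traffic light crops.
import cv2
import numpy as np

HSV_RANGES = {
    "red":    [((0, 100, 100), (10, 255, 255)), ((170, 100, 100), (180, 255, 255))],
    "yellow": [((18, 100, 100), (35, 255, 255))],
    "green":  [((40, 100, 100), (90, 255, 255))],
}

def classify_light(bgr_image, box, min_pixels=20):
    """Return 'red' | 'yellow' | 'green' | 'none' for one traffic light box."""
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = bgr_image[y1:y2, x1:x2]
    if crop.size == 0:
        return "none"
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    counts = {}
    for color, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        counts[color] = int(cv2.countNonZero(mask))
    best = max(counts, key=counts.get)
    return best if counts[best] >= min_pixels else "none"
```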

Fine-Tuned YOLOv8

Traffic Sign Detection

(stop sign, speed limit, etc.)

(coco.yaml)

2D to 3D association

/traffic_signs

/annotated_traffic_lights

/camera_detections

/traffic_lights

(excludes traffic lights)

(pedestrians, cyclists, cars, traffic lights)

Fuse all the detections here and figure out the 3D coordinates of the 2D bounding boxes.
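A minimal sketch of that association, assuming a pinhole camera model: project each LiDAR detection centroid through the camera calibration and attach it to the 2D box that contains the projected point. `K` (intrinsics) and `T_cam_lidar` (extrinsics) are placeholders for the real calibration:

```python
# Illustrative 2D-to-3D association via centroid projection.
import numpy as np

def associate_2d_3d(boxes_2d, centers_3d, K, T_cam_lidar):
    """boxes_2d: list of [x1, y1, x2, y2]; centers_3d: Nx3 points in the LiDAR frame.
    Returns {box_index: lidar_index} for boxes that contain a projected centroid."""
    matches = {}
    for j, p_lidar in enumerate(centers_3d):
        p_cam = T_cam_lidar @ np.append(p_lidar, 1.0)   # transform into camera frame
        if p_cam[2] <= 0:                               # behind the camera
            continue
        uvw = K @ p_cam[:3]
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]         # pixel coordinates
        for i, (x1, y1, x2, y2) in enumerate(boxes_2d):
            if x1 <= u <= x2 and y1 <= v <= y2 and i not in matches:
                matches[i] = j
                break
    return matches
```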

Semantic Segmentation

TBD

/lidar_detections

(None, yellow, red, green)

Ontario Traffic Signs

We can probably just focus on the main ones (stop sign, school zone, speed limits). Data is hard to collect, and the HD map should have this information anyway.

To figure out which area of the road is actually drivable

Localization

Use the NovAtel; potentially SLAM in the future

Detection3DArray

Detection2DArray

Detected Classes

(text)

For visualized topics, append the _viz suffix
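For example, a node publishing /camera_detections would expose its debug image on /camera_detections_viz. The node below is hypothetical, but the rclpy calls are standard:

```python
# Sketch of the naming convention: data topic plus a `_viz` companion.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray

class CameraDetectionNode(Node):
    def __init__(self):
        super().__init__("camera_object_detection")
        topic = "/camera_detections"
        self.det_pub = self.create_publisher(Detection2DArray, topic, 10)
        self.viz_pub = self.create_publisher(Image, topic + "_viz", 10)  # annotated image for RViz

def main():
    rclpy.init()
    rclpy.spin(CameraDetectionNode())
```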

*Consider detections_2d and detections_3d??

Object Tracking

/3d_tracked_detections
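The diagram doesn't specify the tracking method. Purely for illustration, a greedy nearest-centroid tracker over successive frames of 3D detections might look like the sketch below; the 2 m association gate and the ID handling are assumptions:

```python
# Illustrative only: greedy nearest-centroid tracking of 3D detections.
# A real tracker (e.g. with Kalman filtering and motion models) is TBD.
import numpy as np

class CentroidTracker:
    def __init__(self, max_dist=2.0):
        self.max_dist = max_dist   # association gate in metres (assumed)
        self.tracks = {}           # track_id -> last centroid (x, y, z)
        self.next_id = 0

    def update(self, centroids):
        """centroids: list of (x, y, z); returns one track id per detection."""
        assigned, used = [], set()
        for c in centroids:
            best_id, best_d = None, self.max_dist
            for tid, prev in self.tracks.items():
                if tid in used:
                    continue
                d = float(np.linalg.norm(np.array(c) - np.array(prev)))
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:                 # no match: start a new track
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = c
            used.add(best_id)
            assigned.append(best_id)
        return assigned
```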